00:00:00.000 Started by upstream project "autotest-per-patch" build number 132784
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.125 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.127 The recommended git tool is: git
00:00:00.127 using credential 00000000-0000-0000-0000-000000000002
00:00:00.129 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.163 Fetching changes from the remote Git repository
00:00:00.167 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.208 Using shallow fetch with depth 1
00:00:00.208 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.208 > git --version # timeout=10
00:00:00.248 > git --version # 'git version 2.39.2'
00:00:00.248 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.275 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.275 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.726 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.737 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.750 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:06.750 > git config core.sparsecheckout # timeout=10
00:00:06.761 > git read-tree -mu HEAD # timeout=10
00:00:06.778 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:06.801 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:06.801 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:06.991 [Pipeline] Start of Pipeline
00:00:07.001 [Pipeline] library
00:00:07.002 Loading library shm_lib@master
00:00:07.002 Library shm_lib@master is cached. Copying from home.
00:00:07.016 [Pipeline] node
00:00:22.018 Still waiting to schedule task
00:00:22.018 Waiting for next available executor on ‘vagrant-vm-host’
00:05:31.288 Running on VM-host-SM16 in /var/jenkins/workspace/nvme-vg-autotest_3
00:05:31.291 [Pipeline] {
00:05:31.303 [Pipeline] catchError
00:05:31.305 [Pipeline] {
00:05:31.322 [Pipeline] wrap
00:05:31.333 [Pipeline] {
00:05:31.378 [Pipeline] stage
00:05:31.380 [Pipeline] { (Prologue)
00:05:31.399 [Pipeline] echo
00:05:31.401 Node: VM-host-SM16
00:05:31.407 [Pipeline] cleanWs
00:05:31.416 [WS-CLEANUP] Deleting project workspace...
00:05:31.416 [WS-CLEANUP] Deferred wipeout is used...
00:05:31.422 [WS-CLEANUP] done
00:05:31.661 [Pipeline] setCustomBuildProperty
00:05:31.779 [Pipeline] httpRequest
00:05:32.172 [Pipeline] echo
00:05:32.173 Sorcerer 10.211.164.112 is alive
00:05:32.183 [Pipeline] retry
00:05:32.185 [Pipeline] {
00:05:32.197 [Pipeline] httpRequest
00:05:32.201 HttpMethod: GET
00:05:32.202 URL: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:05:32.202 Sending request to url: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:05:32.205 Response Code: HTTP/1.1 200 OK
00:05:32.206 Success: Status code 200 is in the accepted range: 200,404
00:05:32.206 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_3/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:05:32.351 [Pipeline] }
00:05:32.368 [Pipeline] // retry
00:05:32.376 [Pipeline] sh
00:05:32.656 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:05:32.671 [Pipeline] httpRequest
00:05:33.060 [Pipeline] echo
00:05:33.062 Sorcerer 10.211.164.112 is alive
00:05:33.073 [Pipeline] retry
00:05:33.075 [Pipeline] {
00:05:33.090 [Pipeline] httpRequest
00:05:33.095 HttpMethod: GET
00:05:33.095 URL: http://10.211.164.112/packages/spdk_b4f857a04df76242552d961ed9f9f1590167df2f.tar.gz
00:05:33.097 Sending request to url: http://10.211.164.112/packages/spdk_b4f857a04df76242552d961ed9f9f1590167df2f.tar.gz
00:05:33.099 Response Code: HTTP/1.1 200 OK
00:05:33.100 Success: Status code 200 is in the accepted range: 200,404
00:05:33.100 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_3/spdk_b4f857a04df76242552d961ed9f9f1590167df2f.tar.gz
00:05:36.085 [Pipeline] }
00:05:36.105 [Pipeline] // retry
00:05:36.113 [Pipeline] sh
00:05:36.394 + tar --no-same-owner -xf spdk_b4f857a04df76242552d961ed9f9f1590167df2f.tar.gz
00:05:40.624 [Pipeline] sh
00:05:40.903 + git -C spdk log --oneline -n5
00:05:40.904 b4f857a04 env: add mem_map_fini and vtophys_fini for cleanup
00:05:40.904 3fe025922 env: handle possible DPDK errors in mem_map_init
00:05:40.904 b71c8b8dd env: explicitly set --legacy-mem flag in no hugepages mode
00:05:40.904 496bfd677 env: match legacy mem mode config with DPDK
00:05:40.904 a2f5e1c2d blob: don't free bs when spdk_bs_destroy/spdk_bs_unload fails
00:05:40.921 [Pipeline] writeFile
00:05:40.937 [Pipeline] sh
00:05:41.219 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:05:41.230 [Pipeline] sh
00:05:41.509 + cat autorun-spdk.conf
00:05:41.509 SPDK_RUN_FUNCTIONAL_TEST=1
00:05:41.509 SPDK_TEST_NVME=1
00:05:41.509 SPDK_TEST_FTL=1
00:05:41.509 SPDK_TEST_ISAL=1
00:05:41.509 SPDK_RUN_ASAN=1
00:05:41.509 SPDK_RUN_UBSAN=1
00:05:41.509 SPDK_TEST_XNVME=1
00:05:41.509 SPDK_TEST_NVME_FDP=1
00:05:41.509 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:05:41.515 RUN_NIGHTLY=0
00:05:41.517 [Pipeline] }
00:05:41.530 [Pipeline] // stage
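Note: autorun-spdk.conf, listed above, is a plain shell fragment of SPDK_*/RUN_* flags with no logic of its own; the test scripts source it and gate each feature on the flag's value, as the prepare_nvme.sh trace below shows. A minimal sketch of the consuming side, assuming the same flag names:

    #!/usr/bin/env bash
    # Source the job configuration; every flag becomes a plain 0/1 shell variable.
    source ./autorun-spdk.conf
    # Arithmetic evaluation reads the flag as an integer (unset flags count as 0).
    if (( SPDK_TEST_FTL == 1 )); then
        echo "FTL-specific setup would run here"
    fi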
00:05:41.544 [Pipeline] stage
00:05:41.546 [Pipeline] { (Run VM)
00:05:41.558 [Pipeline] sh
00:05:41.837 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:05:41.837 + echo 'Start stage prepare_nvme.sh'
00:05:41.837 Start stage prepare_nvme.sh
00:05:41.837 + [[ -n 6 ]]
00:05:41.837 + disk_prefix=ex6
00:05:41.837 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_3 ]]
00:05:41.837 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_3/autorun-spdk.conf ]]
00:05:41.837 + source /var/jenkins/workspace/nvme-vg-autotest_3/autorun-spdk.conf
00:05:41.837 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:05:41.837 ++ SPDK_TEST_NVME=1
00:05:41.837 ++ SPDK_TEST_FTL=1
00:05:41.837 ++ SPDK_TEST_ISAL=1
00:05:41.837 ++ SPDK_RUN_ASAN=1
00:05:41.837 ++ SPDK_RUN_UBSAN=1
00:05:41.837 ++ SPDK_TEST_XNVME=1
00:05:41.837 ++ SPDK_TEST_NVME_FDP=1
00:05:41.837 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:05:41.837 ++ RUN_NIGHTLY=0
00:05:41.837 + cd /var/jenkins/workspace/nvme-vg-autotest_3
00:05:41.837 + nvme_files=()
00:05:41.837 + declare -A nvme_files
00:05:41.837 + backend_dir=/var/lib/libvirt/images/backends
00:05:41.837 + nvme_files['nvme.img']=5G
00:05:41.837 + nvme_files['nvme-cmb.img']=5G
00:05:41.837 + nvme_files['nvme-multi0.img']=4G
00:05:41.837 + nvme_files['nvme-multi1.img']=4G
00:05:41.837 + nvme_files['nvme-multi2.img']=4G
00:05:41.837 + nvme_files['nvme-openstack.img']=8G
00:05:41.837 + nvme_files['nvme-zns.img']=5G
00:05:41.837 + (( SPDK_TEST_NVME_PMR == 1 ))
00:05:41.837 + (( SPDK_TEST_FTL == 1 ))
00:05:41.837 + nvme_files["nvme-ftl.img"]=6G
00:05:41.837 + (( SPDK_TEST_NVME_FDP == 1 ))
00:05:41.837 + nvme_files["nvme-fdp.img"]=1G
00:05:41.837 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:05:41.837 + for nvme in "${!nvme_files[@]}"
00:05:41.837 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G
00:05:41.837 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:05:41.837 + for nvme in "${!nvme_files[@]}"
00:05:41.837 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-ftl.img -s 6G
00:05:41.837 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:05:41.837 + for nvme in "${!nvme_files[@]}"
00:05:41.837 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G
00:05:41.837 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:05:41.837 + for nvme in "${!nvme_files[@]}"
00:05:41.837 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G
00:05:41.837 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:05:42.095 + for nvme in "${!nvme_files[@]}"
00:05:42.095 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G
00:05:42.660 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:05:42.660 + for nvme in "${!nvme_files[@]}"
00:05:42.660 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G
00:05:42.660 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:05:42.660 + for nvme in "${!nvme_files[@]}"
00:05:42.660 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G
00:05:42.660 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:05:42.660 + for nvme in "${!nvme_files[@]}"
00:05:42.660 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-fdp.img -s 1G
00:05:42.918 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:05:42.918 + for nvme in "${!nvme_files[@]}"
00:05:42.918 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G
00:05:43.483 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:05:43.483 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu
00:05:43.483 + echo 'End stage prepare_nvme.sh'
00:05:43.483 End stage prepare_nvme.sh
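Note: the xtrace above is the per-image loop of prepare_nvme.sh. Reconstructed from the trace (variable names as traced, sizes as configured), the pattern is an associative array mapping image name to size, with optional entries appended per test flag:

    # Base set of NVMe backing images and their sizes.
    declare -A nvme_files=(
        [nvme.img]=5G [nvme-cmb.img]=5G [nvme-multi0.img]=4G
        [nvme-multi1.img]=4G [nvme-multi2.img]=4G
        [nvme-openstack.img]=8G [nvme-zns.img]=5G
    )
    # Extra images only when the matching test flag is set.
    (( SPDK_TEST_FTL == 1 ))      && nvme_files[nvme-ftl.img]=6G
    (( SPDK_TEST_NVME_FDP == 1 )) && nvme_files[nvme-fdp.img]=1G
    for nvme in "${!nvme_files[@]}"; do
        sudo -E spdk/scripts/vagrant/create_nvme_img.sh \
            -n "$backend_dir/${disk_prefix}-${nvme}" -s "${nvme_files[$nvme]}"
    done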
00:05:43.495 [Pipeline] sh
00:05:43.774 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:05:43.774 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex6-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:05:43.774
00:05:43.774 DIR=/var/jenkins/workspace/nvme-vg-autotest_3/spdk/scripts/vagrant
00:05:43.774 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_3/spdk
00:05:43.774 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_3
00:05:43.774 HELP=0
00:05:43.774 DRY_RUN=0
00:05:43.774 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,/var/lib/libvirt/images/backends/ex6-nvme-fdp.img,
00:05:43.774 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:05:43.774 NVME_AUTO_CREATE=0
00:05:43.774 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,,
00:05:43.774 NVME_CMB=,,,,
00:05:43.774 NVME_PMR=,,,,
00:05:43.774 NVME_ZNS=,,,,
00:05:43.774 NVME_MS=true,,,,
00:05:43.774 NVME_FDP=,,,on,
00:05:43.774 SPDK_VAGRANT_DISTRO=fedora39
00:05:43.774 SPDK_VAGRANT_VMCPU=10
00:05:43.774 SPDK_VAGRANT_VMRAM=12288
00:05:43.774 SPDK_VAGRANT_PROVIDER=libvirt
00:05:43.774 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:05:43.774 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:05:43.774 SPDK_OPENSTACK_NETWORK=0
00:05:43.774 VAGRANT_PACKAGE_BOX=0
00:05:43.774 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_3/spdk/scripts/vagrant/Vagrantfile
00:05:43.774 FORCE_DISTRO=true
00:05:43.774 VAGRANT_BOX_VERSION=
00:05:43.774 EXTRA_VAGRANTFILES=
00:05:43.774 NIC_MODEL=e1000
00:05:43.774
00:05:43.774 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt'
00:05:43.774 /var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest_3
00:05:47.054 Bringing machine 'default' up with 'libvirt' provider...
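Note: the NVME_* variables printed above appear to be positional, comma-separated lists with one slot per controller (four controllers here, trailing comma included), so NVME_MS=true,,,, reads as "metadata on controller 0 only" and NVME_FDP=,,,on, as "FDP on controller 3 only". A sketch of splitting such a list in bash; the per-slot semantics are an inference from this log, not a documented interface:

    IFS=',' read -ra nvme_file <<< "$NVME_FILE"
    for i in "${!nvme_file[@]}"; do
        echo "controller $i -> backing file ${nvme_file[$i]}"
    done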
00:05:47.989 ==> default: Creating image (snapshot of base box volume).
00:05:47.989 ==> default: Creating domain with the following settings...
00:05:47.989 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733738238_ff0a1714197ea14df57f
00:05:47.989 ==> default: -- Domain type: kvm
00:05:47.989 ==> default: -- Cpus: 10
00:05:47.989 ==> default: -- Feature: acpi
00:05:47.989 ==> default: -- Feature: apic
00:05:47.989 ==> default: -- Feature: pae
00:05:47.989 ==> default: -- Memory: 12288M
00:05:47.989 ==> default: -- Memory Backing: hugepages:
00:05:47.989 ==> default: -- Management MAC:
00:05:47.989 ==> default: -- Loader:
00:05:47.989 ==> default: -- Nvram:
00:05:47.989 ==> default: -- Base box: spdk/fedora39
00:05:47.989 ==> default: -- Storage pool: default
00:05:47.989 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733738238_ff0a1714197ea14df57f.img (20G)
00:05:47.989 ==> default: -- Volume Cache: default
00:05:47.989 ==> default: -- Kernel:
00:05:47.989 ==> default: -- Initrd:
00:05:47.989 ==> default: -- Graphics Type: vnc
00:05:47.989 ==> default: -- Graphics Port: -1
00:05:47.989 ==> default: -- Graphics IP: 127.0.0.1
00:05:47.989 ==> default: -- Graphics Password: Not defined
00:05:47.989 ==> default: -- Video Type: cirrus
00:05:47.989 ==> default: -- Video VRAM: 9216
00:05:47.989 ==> default: -- Sound Type:
00:05:47.989 ==> default: -- Keymap: en-us
00:05:47.989 ==> default: -- TPM Path:
00:05:47.989 ==> default: -- INPUT: type=mouse, bus=ps2
00:05:47.989 ==> default: -- Command line args:
00:05:47.989 ==> default: -> value=-device,
00:05:47.989 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:05:47.989 ==> default: -> value=-drive,
00:05:47.989 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:05:47.989 ==> default: -> value=-device,
00:05:47.989 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:05:47.989 ==> default: -> value=-device,
00:05:47.989 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:05:47.989 ==> default: -> value=-drive,
00:05:47.989 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-1-drive0,
00:05:47.989 ==> default: -> value=-device,
00:05:47.989 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:05:47.989 ==> default: -> value=-device,
00:05:47.989 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:05:47.989 ==> default: -> value=-drive,
00:05:47.989 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:05:47.989 ==> default: -> value=-device,
00:05:47.989 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:05:47.989 ==> default: -> value=-drive,
00:05:47.989 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:05:47.989 ==> default: -> value=-device,
00:05:47.989 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:05:47.989 ==> default: -> value=-drive,
00:05:47.989 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:05:47.989 ==> default: -> value=-device,
00:05:47.989 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:05:47.989 ==> default: -> value=-device,
00:05:47.989 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:05:47.989 ==> default: -> value=-device,
00:05:47.989 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:05:47.989 ==> default: -> value=-drive,
00:05:47.989 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:05:47.989 ==> default: -> value=-device,
00:05:47.989 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
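Note: the value= pairs above are the raw QEMU arguments libvirt passes through, one -drive/-device pair per NVMe namespace plus an nvme-subsys device for the FDP-enabled controller. Flattened into a standalone invocation for the single-namespace controller (serial 12341) it would look roughly like the sketch below; the -nodefaults/-m/-display flags are illustrative padding, everything else is verbatim from the log:

    qemu-system-x86_64 -nodefaults -m 512 -display none \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-1-drive0 \
        -device nvme,id=nvme-1,serial=12341,addr=0x11 \
        -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096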
00:05:48.248 ==> default: Creating shared folders metadata...
00:05:48.248 ==> default: Starting domain.
00:05:50.150 ==> default: Waiting for domain to get an IP address...
00:06:05.099 ==> default: Waiting for SSH to become available...
00:06:06.475 ==> default: Configuring and enabling network interfaces...
00:06:11.745     default: SSH address: 192.168.121.67:22
00:06:11.745     default: SSH username: vagrant
00:06:11.745     default: SSH auth method: private key
00:06:13.645 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_3/spdk/ => /home/vagrant/spdk_repo/spdk
00:06:21.859 ==> default: Mounting SSHFS shared folder...
00:06:23.758 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:06:23.758 ==> default: Checking Mount..
00:06:24.691 ==> default: Folder Successfully Mounted!
00:06:24.691 ==> default: Running provisioner: file...
00:06:25.626     default: ~/.gitconfig => .gitconfig
00:06:25.884
00:06:25.884 SUCCESS!
00:06:25.884
00:06:25.884 cd to /var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt and type "vagrant ssh" to use.
00:06:25.884 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:06:25.884 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt" to destroy all trace of vm.
00:06:25.884
00:06:25.893 [Pipeline] }
00:06:25.910 [Pipeline] // stage
00:06:25.919 [Pipeline] dir
00:06:25.920 Running in /var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt
00:06:25.921 [Pipeline] {
00:06:25.933 [Pipeline] catchError
00:06:25.935 [Pipeline] {
00:06:25.947 [Pipeline] sh
00:06:26.227 + vagrant ssh-config --host vagrant
00:06:26.227 + sed -ne /^Host/,$p
00:06:26.227 + tee ssh_conf
00:06:30.411 Host vagrant
00:06:30.411   HostName 192.168.121.67
00:06:30.411   User vagrant
00:06:30.411   Port 22
00:06:30.411   UserKnownHostsFile /dev/null
00:06:30.411   StrictHostKeyChecking no
00:06:30.411   PasswordAuthentication no
00:06:30.411   IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:06:30.411   IdentitiesOnly yes
00:06:30.411   LogLevel FATAL
00:06:30.411   ForwardAgent yes
00:06:30.411   ForwardX11 yes
00:06:30.411
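Note: the Host block above is captured into ssh_conf so that later steps can reach the VM non-interactively without vagrant in the loop; any OpenSSH client can reuse it via -F, which is exactly what the subsequent ssh/scp steps do:

    vagrant ssh-config --host vagrant | sed -ne '/^Host/,$p' > ssh_conf
    ssh -F ssh_conf vagrant uname -a              # run a command in the VM
    scp -F ssh_conf ./some-script.sh vagrant:./   # copy files into it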
00:06:30.421 [Pipeline] withEnv
00:06:30.423 [Pipeline] {
00:06:30.433 [Pipeline] sh
00:06:30.717 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:06:30.717 source /etc/os-release
00:06:30.717 [[ -e /image.version ]] && img=$(< /image.version)
00:06:30.717 # Minimal, systemd-like check.
00:06:30.717 if [[ -e /.dockerenv ]]; then
00:06:30.717 # Clear garbage from the node's name:
00:06:30.717 # agt-er_autotest_547-896 -> autotest_547-896
00:06:30.717 # $HOSTNAME is the actual container id
00:06:30.717 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:06:30.717 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:06:30.717 # We can assume this is a mount from a host where container is running,
00:06:30.717 # so fetch its hostname to easily identify the target swarm worker.
00:06:30.717 container="$(< /etc/hostname) ($agent)"
00:06:30.717 else
00:06:30.717 # Fallback
00:06:30.717 container=$agent
00:06:30.717 fi
00:06:30.717 fi
00:06:30.717 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:06:30.717
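Note: the container-name cleanup above relies on bash prefix removal: ${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} deletes the shortest leading match of *_, i.e. everything up to and including the first underscore, which is what turns agt-er_autotest_547-896 into autotest_547-896:

    name='agt-er_autotest_547-896'
    echo "${name#*_}"    # -> autotest_547-896  (shortest match strips 'agt-er_')
    echo "${name##*_}"   # -> 547-896           (longest match, for contrast)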
00:06:30.987 [Pipeline] }
00:06:31.003 [Pipeline] // withEnv
00:06:31.011 [Pipeline] setCustomBuildProperty
00:06:31.025 [Pipeline] stage
00:06:31.027 [Pipeline] { (Tests)
00:06:31.044 [Pipeline] sh
00:06:31.323 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:06:31.594 [Pipeline] sh
00:06:31.872 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:06:32.146 [Pipeline] timeout
00:06:32.146 Timeout set to expire in 50 min
00:06:32.148 [Pipeline] {
00:06:32.164 [Pipeline] sh
00:06:32.444 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:06:33.010 HEAD is now at b4f857a04 env: add mem_map_fini and vtophys_fini for cleanup
00:06:33.022 [Pipeline] sh
00:06:33.300 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:06:33.571 [Pipeline] sh
00:06:33.849 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_3/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:06:34.123 [Pipeline] sh
00:06:34.403 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
00:06:34.661 ++ readlink -f spdk_repo
00:06:34.661 + DIR_ROOT=/home/vagrant/spdk_repo
00:06:34.661 + [[ -n /home/vagrant/spdk_repo ]]
00:06:34.661 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:06:34.661 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:06:34.661 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:06:34.661 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:06:34.661 + [[ -d /home/vagrant/spdk_repo/output ]]
00:06:34.661 + [[ nvme-vg-autotest == pkgdep-* ]]
00:06:34.661 + cd /home/vagrant/spdk_repo
00:06:34.661 + source /etc/os-release
00:06:34.661 ++ NAME='Fedora Linux'
00:06:34.661 ++ VERSION='39 (Cloud Edition)'
00:06:34.661 ++ ID=fedora
00:06:34.661 ++ VERSION_ID=39
00:06:34.661 ++ VERSION_CODENAME=
00:06:34.661 ++ PLATFORM_ID=platform:f39
00:06:34.661 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:06:34.661 ++ ANSI_COLOR='0;38;2;60;110;180'
00:06:34.661 ++ LOGO=fedora-logo-icon
00:06:34.661 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:06:34.661 ++ HOME_URL=https://fedoraproject.org/
00:06:34.661 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:06:34.661 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:06:34.661 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:06:34.661 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:06:34.661 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:06:34.661 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:06:34.661 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:06:34.661 ++ SUPPORT_END=2024-11-12
00:06:34.661 ++ VARIANT='Cloud Edition'
00:06:34.661 ++ VARIANT_ID=cloud
00:06:34.661 + uname -a
00:06:34.661 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:06:34.661 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:06:34.919 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:35.178 Hugepages
00:06:35.178 node hugesize free / total
00:06:35.178 node0 1048576kB 0 / 0
00:06:35.178 node0 2048kB 0 / 0
00:06:35.178
00:06:35.178 Type BDF Vendor Device NUMA Driver Device Block devices
00:06:35.437 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:06:35.437 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:06:35.437 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:06:35.437 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:06:35.437 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1
00:06:35.437 + rm -f /tmp/spdk-ld-path
00:06:35.437 + source autorun-spdk.conf
00:06:35.437 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:06:35.437 ++ SPDK_TEST_NVME=1
00:06:35.437 ++ SPDK_TEST_FTL=1
00:06:35.437 ++ SPDK_TEST_ISAL=1
00:06:35.437 ++ SPDK_RUN_ASAN=1
00:06:35.437 ++ SPDK_RUN_UBSAN=1
00:06:35.437 ++ SPDK_TEST_XNVME=1
00:06:35.437 ++ SPDK_TEST_NVME_FDP=1
00:06:35.437 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:06:35.437 ++ RUN_NIGHTLY=0
00:06:35.437 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:06:35.437 + [[ -n '' ]]
00:06:35.437 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:06:35.437 + for M in /var/spdk/build-*-manifest.txt
00:06:35.437 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:06:35.437 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:06:35.437 + for M in /var/spdk/build-*-manifest.txt
00:06:35.437 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:06:35.437 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:06:35.437 + for M in /var/spdk/build-*-manifest.txt
00:06:35.437 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:06:35.437 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:06:35.437 ++ uname
00:06:35.437 + [[ Linux == \L\i\n\u\x ]]
00:06:35.437 + sudo dmesg -T
00:06:35.437 + sudo dmesg --clear
00:06:35.437 + dmesg_pid=5396
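Note: the interleaved dmesg lines here (including the sudo dmesg -Tw that surfaces slightly later in the trace) are consistent with dumping and clearing the kernel ring buffer, then starting a follow-mode watcher in the background and recording its PID (5396 in this run) for later cleanup. A plausible reading of the helper, not the verbatim SPDK script:

    sudo dmesg -T        # dump what is in the ring buffer now
    sudo dmesg --clear   # start the test run from a clean buffer
    sudo dmesg -Tw &     # follow new kernel messages in the background
    dmesg_pid=$!         # remember the watcher's PID so it can be killed later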
00:06:35.437 + [[ Fedora Linux == FreeBSD ]]
00:06:35.437 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:06:35.437 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:06:35.437 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:06:35.437 + [[ -x /usr/src/fio-static/fio ]]
00:06:35.437 + sudo dmesg -Tw
00:06:35.437 + export FIO_BIN=/usr/src/fio-static/fio
00:06:35.437 + FIO_BIN=/usr/src/fio-static/fio
00:06:35.437 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:06:35.437 + [[ ! -v VFIO_QEMU_BIN ]]
00:06:35.437 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:06:35.437 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:06:35.437 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:06:35.437 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:06:35.437 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:06:35.437 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:06:35.437 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:06:35.696 09:58:06 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:06:35.696 09:58:06 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:06:35.696 09:58:06 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:06:35.696 09:58:06 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:06:35.696 09:58:06 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:06:35.696 09:58:06 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:06:35.696 09:58:06 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:06:35.696 09:58:06 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:06:35.696 09:58:06 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:06:35.696 09:58:06 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:06:35.696 09:58:06 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:06:35.696 09:58:06 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
00:06:35.696 09:58:06 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:06:35.696 09:58:06 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:06:35.696 09:58:06 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:06:35.696 09:58:06 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:06:35.696 09:58:06 -- scripts/common.sh@15 -- $ shopt -s extglob
00:06:35.696 09:58:06 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:06:35.696 09:58:06 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:35.696 09:58:06 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:06:35.696 09:58:06 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:35.696 09:58:06 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:35.696 09:58:06 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:35.696 09:58:06 -- paths/export.sh@5 -- $ export PATH
00:06:35.696 09:58:06 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:35.696 09:58:06 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:06:35.696 09:58:06 -- common/autobuild_common.sh@493 -- $ date +%s
00:06:35.696 09:58:06 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733738286.XXXXXX
00:06:35.696 09:58:06 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733738286.KemH3v
00:06:35.696 09:58:06 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:06:35.696 09:58:06 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:06:35.696 09:58:06 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:06:35.696 09:58:06 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:06:35.696 09:58:06 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:06:35.696 09:58:06 -- common/autobuild_common.sh@509 -- $ get_config_params
00:06:35.696 09:58:06 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:06:35.696 09:58:06 -- common/autotest_common.sh@10 -- $ set +x
00:06:35.696 09:58:06 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:06:35.696 09:58:06 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:06:35.696 09:58:06 -- pm/common@17 -- $ local monitor
00:06:35.696 09:58:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:06:35.696 09:58:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:06:35.696 09:58:06 -- pm/common@21 -- $ date +%s
00:06:35.696 09:58:06 -- pm/common@25 -- $ sleep 1
00:06:35.696 09:58:06 -- pm/common@21 -- $ date +%s
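Note: the workspace above comes from a standard mktemp template: -d creates a directory, -t resolves the template under $TMPDIR (default /tmp), and the XXXXXX suffix is replaced with random characters, which is how spdk_1733738286.XXXXXX became /tmp/spdk_1733738286.KemH3v:

    ws=$(mktemp -dt "spdk_$(date +%s).XXXXXX")
    echo "$ws"    # e.g. /tmp/spdk_1733738286.KemH3v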
00:06:35.696 09:58:06 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733738286
00:06:35.696 09:58:06 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733738286
00:06:35.696 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733738286_collect-cpu-load.pm.log
00:06:35.696 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733738286_collect-vmstat.pm.log
00:06:36.631 09:58:07 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:06:36.631 09:58:07 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:06:36.631 09:58:07 -- spdk/autobuild.sh@12 -- $ umask 022
00:06:36.631 09:58:07 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:06:36.631 09:58:07 -- spdk/autobuild.sh@16 -- $ date -u
00:06:36.631 Mon Dec 9 09:58:07 AM UTC 2024
00:06:36.631 09:58:07 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:06:36.631 v25.01-pre-315-gb4f857a04
00:06:36.631 09:58:07 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:06:36.631 09:58:07 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:06:36.631 09:58:07 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:06:36.631 09:58:07 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:06:36.631 09:58:07 -- common/autotest_common.sh@10 -- $ set +x
00:06:36.631 ************************************
00:06:36.631 START TEST asan
00:06:36.631 ************************************
00:06:36.631 using asan
00:06:36.631 09:58:07 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:06:36.631
00:06:36.631 real	0m0.000s
00:06:36.631 user	0m0.000s
00:06:36.631 sys	0m0.000s
00:06:36.631 09:58:07 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:06:36.631 ************************************
00:06:36.631 END TEST asan
00:06:36.631 ************************************
00:06:36.631 09:58:07 asan -- common/autotest_common.sh@10 -- $ set +x
00:06:36.890 09:58:07 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:06:36.890 09:58:07 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:06:36.890 09:58:07 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:06:36.890 09:58:07 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:06:36.890 09:58:07 -- common/autotest_common.sh@10 -- $ set +x
00:06:36.890 ************************************
00:06:36.890 START TEST ubsan
00:06:36.890 ************************************
00:06:36.890 using ubsan
00:06:36.890 09:58:07 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:06:36.890
00:06:36.890 real	0m0.000s
00:06:36.890 user	0m0.000s
00:06:36.890 sys	0m0.000s
00:06:36.890 09:58:07 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:06:36.890 ************************************
00:06:36.890 09:58:07 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:06:36.890 END TEST ubsan
00:06:36.890 ************************************
00:06:36.890 09:58:07 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:06:36.890 09:58:07 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:06:36.890 09:58:07 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:06:36.890 09:58:07 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:06:36.890 09:58:07 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:06:36.890 09:58:07 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:06:36.890 09:58:07 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
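Note: the START/END banners and the real/user/sys lines above come from SPDK's run_test helper in autotest_common.sh. A minimal sketch of the same idea, not the full helper (which also manages xtrace state and exit-code bookkeeping):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"    # bash's time keyword prints the real/user/sys lines
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
    run_test asan echo 'using asan'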
00:06:36.890 09:58:07 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:06:36.890 09:58:07 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:06:36.890 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:06:36.890 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:06:37.456 Using 'verbs' RDMA provider
00:06:50.627 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:07:05.537 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:07:05.537 Creating mk/config.mk...done.
00:07:05.537 Creating mk/cc.flags.mk...done.
00:07:05.537 Type 'make' to build.
00:07:05.537 09:58:35 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:07:05.537 09:58:35 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:07:05.537 09:58:35 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:07:05.537 09:58:35 -- common/autotest_common.sh@10 -- $ set +x
00:07:05.537 ************************************
00:07:05.537 START TEST make
00:07:05.537 ************************************
00:07:05.537 09:58:35 make -- common/autotest_common.sh@1129 -- $ make -j10
00:07:05.537 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:07:05.537     export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:07:05.537     meson setup builddir \
00:07:05.537     -Dwith-libaio=enabled \
00:07:05.537     -Dwith-liburing=enabled \
00:07:05.537     -Dwith-libvfn=disabled \
00:07:05.537     -Dwith-spdk=disabled \
00:07:05.537     -Dexamples=false \
00:07:05.537     -Dtests=false \
00:07:05.537     -Dtools=false && \
00:07:05.537     meson compile -C builddir && \
00:07:05.537     cd -)
00:07:05.537 make[1]: Nothing to be done for 'all'.
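Note: the ./configure flag string above is derived from the sourced conf by get_config_params; conceptually, each SPDK_RUN_*/SPDK_TEST_* flag toggles one configure switch. A simplified sketch of that mapping (illustrative; the real helper covers many more flags):

    source /home/vagrant/spdk_repo/autorun-spdk.conf
    config=(--enable-debug --enable-werror)
    [[ $SPDK_RUN_ASAN == 1 ]]   && config+=(--enable-asan)
    [[ $SPDK_RUN_UBSAN == 1 ]]  && config+=(--enable-ubsan)
    [[ $SPDK_TEST_XNVME == 1 ]] && config+=(--with-xnvme)
    ./configure "${config[@]}"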
00:07:07.441 The Meson build system
00:07:07.441 Version: 1.5.0
00:07:07.441 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:07:07.441 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:07:07.441 Build type: native build
00:07:07.441 Project name: xnvme
00:07:07.441 Project version: 0.7.5
00:07:07.441 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:07:07.441 C linker for the host machine: cc ld.bfd 2.40-14
00:07:07.441 Host machine cpu family: x86_64
00:07:07.441 Host machine cpu: x86_64
00:07:07.441 Message: host_machine.system: linux
00:07:07.441 Compiler for C supports arguments -Wno-missing-braces: YES
00:07:07.441 Compiler for C supports arguments -Wno-cast-function-type: YES
00:07:07.441 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:07:07.441 Run-time dependency threads found: YES
00:07:07.441 Has header "setupapi.h" : NO
00:07:07.441 Has header "linux/blkzoned.h" : YES
00:07:07.441 Has header "linux/blkzoned.h" : YES (cached)
00:07:07.441 Has header "libaio.h" : YES
00:07:07.441 Library aio found: YES
00:07:07.441 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:07:07.441 Run-time dependency liburing found: YES 2.2
00:07:07.441 Dependency libvfn skipped: feature with-libvfn disabled
00:07:07.441 Found CMake: /usr/bin/cmake (3.27.7)
00:07:07.441 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:07:07.441 Subproject spdk : skipped: feature with-spdk disabled
00:07:07.442 Run-time dependency appleframeworks found: NO (tried framework)
00:07:07.442 Run-time dependency appleframeworks found: NO (tried framework)
00:07:07.442 Library rt found: YES
00:07:07.442 Checking for function "clock_gettime" with dependency -lrt: YES
00:07:07.442 Configuring xnvme_config.h using configuration
00:07:07.442 Configuring xnvme.spec using configuration
00:07:07.442 Run-time dependency bash-completion found: YES 2.11
00:07:07.442 Message: Bash-completions: /usr/share/bash-completion/completions
00:07:07.442 Program cp found: YES (/usr/bin/cp)
00:07:07.442 Build targets in project: 3
00:07:07.442
00:07:07.442 xnvme 0.7.5
00:07:07.442
00:07:07.442 Subprojects
00:07:07.442 spdk : NO Feature 'with-spdk' disabled
00:07:07.442
00:07:07.442 User defined options
00:07:07.442 examples : false
00:07:07.442 tests : false
00:07:07.442 tools : false
00:07:07.442 with-libaio : enabled
00:07:07.442 with-liburing: enabled
00:07:07.442 with-libvfn : disabled
00:07:07.442 with-spdk : disabled
00:07:07.442
00:07:08.007 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:07:08.007 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:07:08.007 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:07:08.007 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:07:08.007 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:07:08.007 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:07:08.007 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:07:08.007 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:07:08.007 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:07:08.007 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:07:08.007 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:07:08.007 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:07:08.007 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:07:08.266 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:07:08.266 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:07:08.266 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:07:08.266 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:07:08.266 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:07:08.266 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:07:08.266 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:07:08.266 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:07:08.266 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:07:08.266 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:07:08.266 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:07:08.266 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:07:08.266 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:07:08.266 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:07:08.266 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:07:08.266 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:07:08.266 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:07:08.266 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:07:08.266 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:07:08.266 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:07:08.266 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:07:08.266 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:07:08.524 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:07:08.524 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:07:08.524 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:07:08.524 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:07:08.524 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:07:08.524 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:07:08.524 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:07:08.524 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:07:08.524 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:07:08.524 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:07:08.524 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:07:08.524 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:07:08.524 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:07:08.524 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:07:08.524 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:07:08.524 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:07:08.524 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:07:08.524 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:07:08.524 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:07:08.524 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:07:08.524 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:07:08.524 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:07:08.524 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:07:08.782 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:07:08.782 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:07:08.782 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:07:08.782 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:07:08.782 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:07:08.782 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:07:08.782 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:07:08.782 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:07:08.782 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:07:08.782 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:07:08.782 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:07:08.782 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:07:08.782 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:07:08.782 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:07:09.043 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:07:09.043 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:07:09.043 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:07:09.305 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:07:09.563 [75/76] Linking static target lib/libxnvme.a
00:07:09.563 [76/76] Linking target lib/libxnvme.so.0.7.5
00:07:09.563 INFO: autodetecting backend as ninja
00:07:09.563 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:07:09.563 /home/vagrant/spdk_repo/spdk/xnvmebuild
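Note: the two INFO lines above show what meson compile resolves to: meson autodetects the ninja backend and then simply drives ninja in the build directory, so the xnvme build step is equivalent to:

    meson setup builddir -Dwith-libaio=enabled -Dwith-liburing=enabled
    meson compile -C builddir    # effectively: ninja -C builddir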
00:07:19.591 The Meson build system
00:07:19.591 Version: 1.5.0
00:07:19.591 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:07:19.591 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:07:19.591 Build type: native build
00:07:19.591 Program cat found: YES (/usr/bin/cat)
00:07:19.591 Project name: DPDK
00:07:19.591 Project version: 24.03.0
00:07:19.591 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:07:19.591 C linker for the host machine: cc ld.bfd 2.40-14
00:07:19.591 Host machine cpu family: x86_64
00:07:19.591 Host machine cpu: x86_64
00:07:19.591 Message: ## Building in Developer Mode ##
00:07:19.591 Program pkg-config found: YES (/usr/bin/pkg-config)
00:07:19.591 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:07:19.591 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:07:19.591 Program python3 found: YES (/usr/bin/python3)
00:07:19.591 Program cat found: YES (/usr/bin/cat)
00:07:19.591 Compiler for C supports arguments -march=native: YES
00:07:19.591 Checking for size of "void *" : 8
00:07:19.591 Checking for size of "void *" : 8 (cached)
00:07:19.591 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:07:19.591 Library m found: YES
00:07:19.591 Library numa found: YES
00:07:19.591 Has header "numaif.h" : YES
00:07:19.591 Library fdt found: NO
00:07:19.591 Library execinfo found: NO
00:07:19.591 Has header "execinfo.h" : YES
00:07:19.591 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:07:19.591 Run-time dependency libarchive found: NO (tried pkgconfig)
00:07:19.591 Run-time dependency libbsd found: NO (tried pkgconfig)
00:07:19.591 Run-time dependency jansson found: NO (tried pkgconfig)
00:07:19.591 Run-time dependency openssl found: YES 3.1.1
00:07:19.591 Run-time dependency libpcap found: YES 1.10.4
00:07:19.591 Has header "pcap.h" with dependency libpcap: YES
00:07:19.591 Compiler for C supports arguments -Wcast-qual: YES
00:07:19.591 Compiler for C supports arguments -Wdeprecated: YES
00:07:19.591 Compiler for C supports arguments -Wformat: YES
00:07:19.591 Compiler for C supports arguments -Wformat-nonliteral: NO
00:07:19.591 Compiler for C supports arguments -Wformat-security: NO
00:07:19.591 Compiler for C supports arguments -Wmissing-declarations: YES
00:07:19.591 Compiler for C supports arguments -Wmissing-prototypes: YES
00:07:19.591 Compiler for C supports arguments -Wnested-externs: YES
00:07:19.591 Compiler for C supports arguments -Wold-style-definition: YES
00:07:19.591 Compiler for C supports arguments -Wpointer-arith: YES
00:07:19.591 Compiler for C supports arguments -Wsign-compare: YES
00:07:19.591 Compiler for C supports arguments -Wstrict-prototypes: YES
00:07:19.591 Compiler for C supports arguments -Wundef: YES
00:07:19.591 Compiler for C supports arguments -Wwrite-strings: YES
00:07:19.591 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:07:19.591 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:07:19.591 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:07:19.591 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:07:19.591 Program objdump found: YES (/usr/bin/objdump)
00:07:19.591 Compiler for C supports arguments -mavx512f: YES
00:07:19.591 Checking if "AVX512 checking" compiles: YES
00:07:19.591 Fetching value of define "__SSE4_2__" : 1
00:07:19.592 Fetching value of define "__AES__" : 1
00:07:19.592 Fetching value of define "__AVX__" : 1
00:07:19.592 Fetching value of define "__AVX2__" : 1
00:07:19.592 Fetching value of define "__AVX512BW__" : (undefined)
00:07:19.592 Fetching value of define "__AVX512CD__" : (undefined)
00:07:19.592 Fetching value of define "__AVX512DQ__" : (undefined)
00:07:19.592 Fetching value of define "__AVX512F__" : (undefined)
00:07:19.592 Fetching value of define "__AVX512VL__" : (undefined)
00:07:19.592 Fetching value of define "__PCLMUL__" : 1
00:07:19.592 Fetching value of define "__RDRND__" : 1
00:07:19.592 Fetching value of define "__RDSEED__" : 1
00:07:19.592 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:07:19.592 Fetching value of define "__znver1__" : (undefined)
00:07:19.592 Fetching value of define "__znver2__" : (undefined)
00:07:19.592 Fetching value of define "__znver3__" : (undefined)
00:07:19.592 Fetching value of define "__znver4__" : (undefined)
00:07:19.592 Library asan found: YES
00:07:19.592 Compiler for C supports arguments -Wno-format-truncation: YES
00:07:19.592 Message: lib/log: Defining dependency "log"
00:07:19.592 Message: lib/kvargs: Defining dependency "kvargs"
00:07:19.592 Message: lib/telemetry: Defining dependency "telemetry"
00:07:19.592 Library rt found: YES
00:07:19.592 Checking for function "getentropy" : NO
00:07:19.592 Message: lib/eal: Defining dependency "eal"
00:07:19.592 Message: lib/ring: Defining dependency "ring"
00:07:19.592 Message: lib/rcu: Defining dependency "rcu"
00:07:19.592 Message: lib/mempool: Defining dependency "mempool"
00:07:19.592 Message: lib/mbuf: Defining dependency "mbuf"
00:07:19.592 Fetching value of define "__PCLMUL__" : 1 (cached)
00:07:19.592 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:07:19.592 Compiler for C supports arguments -mpclmul: YES
00:07:19.592 Compiler for C supports arguments -maes: YES
00:07:19.592 Compiler for C supports arguments -mavx512f: YES (cached)
00:07:19.592 Compiler for C supports arguments -mavx512bw: YES
00:07:19.592 Compiler for C supports arguments -mavx512dq: YES
00:07:19.592 Compiler for C supports arguments -mavx512vl: YES
00:07:19.592 Compiler for C supports arguments -mvpclmulqdq: YES
00:07:19.592 Compiler for C supports arguments -mavx2: YES
00:07:19.592 Compiler for C supports arguments -mavx: YES
00:07:19.592 Message: lib/net: Defining dependency "net"
00:07:19.592 Message: lib/meter: Defining dependency "meter"
00:07:19.592 Message: lib/ethdev: Defining dependency "ethdev"
00:07:19.592 Message: lib/pci: Defining dependency "pci"
00:07:19.592 Message: lib/cmdline: Defining dependency "cmdline"
00:07:19.592 Message: lib/hash: Defining dependency "hash"
00:07:19.592 Message: lib/timer: Defining dependency "timer"
00:07:19.592 Message: lib/compressdev: Defining dependency "compressdev"
00:07:19.592 Message: lib/cryptodev: Defining dependency "cryptodev"
00:07:19.592 Message: lib/dmadev: Defining dependency "dmadev"
00:07:19.592 Compiler for C supports arguments -Wno-cast-qual: YES
00:07:19.592 Message: lib/power: Defining dependency "power"
00:07:19.592 Message: lib/reorder: Defining dependency "reorder"
00:07:19.592 Message: lib/security: Defining dependency "security"
00:07:19.592 Has header "linux/userfaultfd.h" : YES
00:07:19.592 Has header "linux/vduse.h" : YES
00:07:19.592 Message: lib/vhost: Defining dependency "vhost"
00:07:19.592 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:07:19.592 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:07:19.592 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:07:19.592 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:07:19.592 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:07:19.592 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:07:19.592 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:07:19.592 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:07:19.592 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:07:19.592 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:07:19.592 Program doxygen found: YES (/usr/local/bin/doxygen)
00:07:19.592 Configuring doxy-api-html.conf using configuration
00:07:19.592 Configuring doxy-api-man.conf using configuration
00:07:19.592 Program mandb found: YES (/usr/bin/mandb)
00:07:19.592 Program sphinx-build found: NO
00:07:19.592 Configuring rte_build_config.h using configuration
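Note: the "Fetching value of define" lines above are meson asking the compiler which instruction-set macros -march=native predefines on this host; the same probe can be run by hand by dumping the compiler's predefined macros:

    echo | cc -march=native -dM -E - | grep -E '__(SSE4_2|AES|AVX2|AVX512F|PCLMUL|RDRND|RDSEED)__'
    # each hit corresponds to a "Fetching value of define ... : 1" line above;
    # macros that are absent show up in the log as "(undefined)"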
00:07:19.592 Message:
00:07:19.592 =================
00:07:19.592 Applications Enabled
00:07:19.592 =================
00:07:19.592
00:07:19.592 apps:
00:07:19.592
00:07:19.592
00:07:19.592 Message:
00:07:19.592 =================
00:07:19.592 Libraries Enabled
00:07:19.592 =================
00:07:19.592
00:07:19.592 libs:
00:07:19.592 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:07:19.592 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:07:19.592 cryptodev, dmadev, power, reorder, security, vhost,
00:07:19.592
00:07:19.592 Message:
00:07:19.592 ===============
00:07:19.592 Drivers Enabled
00:07:19.592 ===============
00:07:19.592
00:07:19.592 common:
00:07:19.592
00:07:19.592 bus:
00:07:19.592 pci, vdev,
00:07:19.592 mempool:
00:07:19.592 ring,
00:07:19.592 dma:
00:07:19.592
00:07:19.592 net:
00:07:19.592
00:07:19.592 crypto:
00:07:19.592
00:07:19.592 compress:
00:07:19.592
00:07:19.592 vdpa:
00:07:19.592
00:07:19.592
00:07:19.592 Message:
00:07:19.592 =================
00:07:19.592 Content Skipped
00:07:19.592 =================
00:07:19.592
00:07:19.592 apps:
00:07:19.592 dumpcap: explicitly disabled via build config
00:07:19.592 graph: explicitly disabled via build config
00:07:19.592 pdump: explicitly disabled via build config
00:07:19.592 proc-info: explicitly disabled via build config
00:07:19.592 test-acl: explicitly disabled via build config
00:07:19.592 test-bbdev: explicitly disabled via build config
00:07:19.592 test-cmdline: explicitly disabled via build config
00:07:19.592 test-compress-perf: explicitly disabled via build config
00:07:19.592 test-crypto-perf: explicitly disabled via build config
00:07:19.592 test-dma-perf: explicitly disabled via build config
00:07:19.592 test-eventdev: explicitly disabled via build config
00:07:19.592 test-fib: explicitly disabled via build config
00:07:19.592 test-flow-perf: explicitly disabled via build config
00:07:19.592 test-gpudev: explicitly disabled via build config
00:07:19.592 test-mldev: explicitly disabled via build config
00:07:19.592 test-pipeline: explicitly disabled via build config
00:07:19.592 test-pmd: explicitly disabled via build config
00:07:19.592 test-regex: explicitly disabled via build config
00:07:19.592 test-sad: explicitly disabled via build config
00:07:19.592 test-security-perf: explicitly disabled via build config
00:07:19.592
00:07:19.592 libs:
00:07:19.592 argparse: explicitly disabled via build config
00:07:19.592 metrics: explicitly disabled via build config
00:07:19.592 acl: explicitly disabled via build config
00:07:19.592 bbdev: explicitly disabled via build config
00:07:19.592 bitratestats: explicitly disabled via build config
00:07:19.592 bpf: explicitly disabled via build config
00:07:19.592 cfgfile: explicitly disabled via build config
00:07:19.592 distributor: explicitly disabled via build config
00:07:19.592 efd: explicitly disabled via build config
00:07:19.592 eventdev: explicitly disabled via build config
00:07:19.592 dispatcher: explicitly disabled via build config
00:07:19.592 gpudev: explicitly disabled via build config
00:07:19.592 gro: explicitly disabled via build config
00:07:19.592 gso: explicitly disabled via build config
00:07:19.592 ip_frag: explicitly disabled via build config
00:07:19.592 jobstats: explicitly disabled via build config
00:07:19.592 latencystats: explicitly disabled via build config
00:07:19.592 lpm: explicitly disabled via build config
00:07:19.592 member: explicitly disabled via build config
00:07:19.592 pcapng: explicitly disabled via build config
00:07:19.592 rawdev: explicitly disabled via build config
00:07:19.592 regexdev: explicitly disabled via build config
00:07:19.592 mldev: explicitly disabled via build config
00:07:19.592 rib: explicitly disabled via build config
00:07:19.592 sched: explicitly disabled via build config
disabled via build config 00:07:19.592 stack: explicitly disabled via build config 00:07:19.592 ipsec: explicitly disabled via build config 00:07:19.592 pdcp: explicitly disabled via build config 00:07:19.592 fib: explicitly disabled via build config 00:07:19.592 port: explicitly disabled via build config 00:07:19.592 pdump: explicitly disabled via build config 00:07:19.592 table: explicitly disabled via build config 00:07:19.592 pipeline: explicitly disabled via build config 00:07:19.592 graph: explicitly disabled via build config 00:07:19.592 node: explicitly disabled via build config 00:07:19.592 00:07:19.592 drivers: 00:07:19.592 common/cpt: not in enabled drivers build config 00:07:19.592 common/dpaax: not in enabled drivers build config 00:07:19.592 common/iavf: not in enabled drivers build config 00:07:19.592 common/idpf: not in enabled drivers build config 00:07:19.592 common/ionic: not in enabled drivers build config 00:07:19.592 common/mvep: not in enabled drivers build config 00:07:19.592 common/octeontx: not in enabled drivers build config 00:07:19.592 bus/auxiliary: not in enabled drivers build config 00:07:19.592 bus/cdx: not in enabled drivers build config 00:07:19.592 bus/dpaa: not in enabled drivers build config 00:07:19.592 bus/fslmc: not in enabled drivers build config 00:07:19.592 bus/ifpga: not in enabled drivers build config 00:07:19.592 bus/platform: not in enabled drivers build config 00:07:19.592 bus/uacce: not in enabled drivers build config 00:07:19.592 bus/vmbus: not in enabled drivers build config 00:07:19.592 common/cnxk: not in enabled drivers build config 00:07:19.592 common/mlx5: not in enabled drivers build config 00:07:19.592 common/nfp: not in enabled drivers build config 00:07:19.592 common/nitrox: not in enabled drivers build config 00:07:19.592 common/qat: not in enabled drivers build config 00:07:19.592 common/sfc_efx: not in enabled drivers build config 00:07:19.592 mempool/bucket: not in enabled drivers build config 00:07:19.593 mempool/cnxk: not in enabled drivers build config 00:07:19.593 mempool/dpaa: not in enabled drivers build config 00:07:19.593 mempool/dpaa2: not in enabled drivers build config 00:07:19.593 mempool/octeontx: not in enabled drivers build config 00:07:19.593 mempool/stack: not in enabled drivers build config 00:07:19.593 dma/cnxk: not in enabled drivers build config 00:07:19.593 dma/dpaa: not in enabled drivers build config 00:07:19.593 dma/dpaa2: not in enabled drivers build config 00:07:19.593 dma/hisilicon: not in enabled drivers build config 00:07:19.593 dma/idxd: not in enabled drivers build config 00:07:19.593 dma/ioat: not in enabled drivers build config 00:07:19.593 dma/skeleton: not in enabled drivers build config 00:07:19.593 net/af_packet: not in enabled drivers build config 00:07:19.593 net/af_xdp: not in enabled drivers build config 00:07:19.593 net/ark: not in enabled drivers build config 00:07:19.593 net/atlantic: not in enabled drivers build config 00:07:19.593 net/avp: not in enabled drivers build config 00:07:19.593 net/axgbe: not in enabled drivers build config 00:07:19.593 net/bnx2x: not in enabled drivers build config 00:07:19.593 net/bnxt: not in enabled drivers build config 00:07:19.593 net/bonding: not in enabled drivers build config 00:07:19.593 net/cnxk: not in enabled drivers build config 00:07:19.593 net/cpfl: not in enabled drivers build config 00:07:19.593 net/cxgbe: not in enabled drivers build config 00:07:19.593 net/dpaa: not in enabled drivers build config 00:07:19.593 net/dpaa2: not in 
enabled drivers build config 00:07:19.593 net/e1000: not in enabled drivers build config 00:07:19.593 net/ena: not in enabled drivers build config 00:07:19.593 net/enetc: not in enabled drivers build config 00:07:19.593 net/enetfec: not in enabled drivers build config 00:07:19.593 net/enic: not in enabled drivers build config 00:07:19.593 net/failsafe: not in enabled drivers build config 00:07:19.593 net/fm10k: not in enabled drivers build config 00:07:19.593 net/gve: not in enabled drivers build config 00:07:19.593 net/hinic: not in enabled drivers build config 00:07:19.593 net/hns3: not in enabled drivers build config 00:07:19.593 net/i40e: not in enabled drivers build config 00:07:19.593 net/iavf: not in enabled drivers build config 00:07:19.593 net/ice: not in enabled drivers build config 00:07:19.593 net/idpf: not in enabled drivers build config 00:07:19.593 net/igc: not in enabled drivers build config 00:07:19.593 net/ionic: not in enabled drivers build config 00:07:19.593 net/ipn3ke: not in enabled drivers build config 00:07:19.593 net/ixgbe: not in enabled drivers build config 00:07:19.593 net/mana: not in enabled drivers build config 00:07:19.593 net/memif: not in enabled drivers build config 00:07:19.593 net/mlx4: not in enabled drivers build config 00:07:19.593 net/mlx5: not in enabled drivers build config 00:07:19.593 net/mvneta: not in enabled drivers build config 00:07:19.593 net/mvpp2: not in enabled drivers build config 00:07:19.593 net/netvsc: not in enabled drivers build config 00:07:19.593 net/nfb: not in enabled drivers build config 00:07:19.593 net/nfp: not in enabled drivers build config 00:07:19.593 net/ngbe: not in enabled drivers build config 00:07:19.593 net/null: not in enabled drivers build config 00:07:19.593 net/octeontx: not in enabled drivers build config 00:07:19.593 net/octeon_ep: not in enabled drivers build config 00:07:19.593 net/pcap: not in enabled drivers build config 00:07:19.593 net/pfe: not in enabled drivers build config 00:07:19.593 net/qede: not in enabled drivers build config 00:07:19.593 net/ring: not in enabled drivers build config 00:07:19.593 net/sfc: not in enabled drivers build config 00:07:19.593 net/softnic: not in enabled drivers build config 00:07:19.593 net/tap: not in enabled drivers build config 00:07:19.593 net/thunderx: not in enabled drivers build config 00:07:19.593 net/txgbe: not in enabled drivers build config 00:07:19.593 net/vdev_netvsc: not in enabled drivers build config 00:07:19.593 net/vhost: not in enabled drivers build config 00:07:19.593 net/virtio: not in enabled drivers build config 00:07:19.593 net/vmxnet3: not in enabled drivers build config 00:07:19.593 raw/*: missing internal dependency, "rawdev" 00:07:19.593 crypto/armv8: not in enabled drivers build config 00:07:19.593 crypto/bcmfs: not in enabled drivers build config 00:07:19.593 crypto/caam_jr: not in enabled drivers build config 00:07:19.593 crypto/ccp: not in enabled drivers build config 00:07:19.593 crypto/cnxk: not in enabled drivers build config 00:07:19.593 crypto/dpaa_sec: not in enabled drivers build config 00:07:19.593 crypto/dpaa2_sec: not in enabled drivers build config 00:07:19.593 crypto/ipsec_mb: not in enabled drivers build config 00:07:19.593 crypto/mlx5: not in enabled drivers build config 00:07:19.593 crypto/mvsam: not in enabled drivers build config 00:07:19.593 crypto/nitrox: not in enabled drivers build config 00:07:19.593 crypto/null: not in enabled drivers build config 00:07:19.593 crypto/octeontx: not in enabled drivers build config 
00:07:19.593 crypto/openssl: not in enabled drivers build config 00:07:19.593 crypto/scheduler: not in enabled drivers build config 00:07:19.593 crypto/uadk: not in enabled drivers build config 00:07:19.593 crypto/virtio: not in enabled drivers build config 00:07:19.593 compress/isal: not in enabled drivers build config 00:07:19.593 compress/mlx5: not in enabled drivers build config 00:07:19.593 compress/nitrox: not in enabled drivers build config 00:07:19.593 compress/octeontx: not in enabled drivers build config 00:07:19.593 compress/zlib: not in enabled drivers build config 00:07:19.593 regex/*: missing internal dependency, "regexdev" 00:07:19.593 ml/*: missing internal dependency, "mldev" 00:07:19.593 vdpa/ifc: not in enabled drivers build config 00:07:19.593 vdpa/mlx5: not in enabled drivers build config 00:07:19.593 vdpa/nfp: not in enabled drivers build config 00:07:19.593 vdpa/sfc: not in enabled drivers build config 00:07:19.593 event/*: missing internal dependency, "eventdev" 00:07:19.593 baseband/*: missing internal dependency, "bbdev" 00:07:19.593 gpu/*: missing internal dependency, "gpudev" 00:07:19.593 00:07:19.593 00:07:19.593 Build targets in project: 85 00:07:19.593 00:07:19.593 DPDK 24.03.0 00:07:19.593 00:07:19.593 User defined options 00:07:19.593 buildtype : debug 00:07:19.593 default_library : shared 00:07:19.593 libdir : lib 00:07:19.593 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:07:19.593 b_sanitize : address 00:07:19.593 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:07:19.593 c_link_args : 00:07:19.593 cpu_instruction_set: native 00:07:19.593 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:07:19.593 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:07:19.593 enable_docs : false 00:07:19.593 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:07:19.593 enable_kmods : false 00:07:19.593 max_lcores : 128 00:07:19.593 tests : false 00:07:19.593 00:07:19.593 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:07:20.160 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:07:20.160 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:07:20.161 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:07:20.161 [3/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:07:20.161 [4/268] Linking static target lib/librte_kvargs.a 00:07:20.161 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:07:20.161 [6/268] Linking static target lib/librte_log.a 00:07:20.727 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:07:20.727 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:07:20.985 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:07:20.985 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:07:20.985 [11/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:07:21.243 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:07:21.243 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:07:21.243 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:07:21.243 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:07:21.243 [16/268] Linking static target lib/librte_telemetry.a 00:07:21.243 [17/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:07:21.501 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:07:21.501 [19/268] Linking target lib/librte_log.so.24.1 00:07:21.501 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:07:21.759 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:07:21.759 [22/268] Linking target lib/librte_kvargs.so.24.1 00:07:22.016 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:07:22.016 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:07:22.016 [25/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:07:22.016 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:07:22.275 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:07:22.275 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:07:22.275 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:07:22.275 [30/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:07:22.275 [31/268] Linking target lib/librte_telemetry.so.24.1 00:07:22.275 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:07:22.534 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:07:22.534 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:07:22.792 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:07:22.792 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:07:22.792 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:07:23.051 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:07:23.309 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:07:23.309 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:07:23.309 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:07:23.309 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:07:23.566 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:07:23.566 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:07:23.823 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:07:23.823 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:07:23.823 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:07:23.823 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:07:23.823 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:07:24.082 [50/268] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:07:24.082 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:07:24.082 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:07:24.646 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:07:24.647 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:07:24.647 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:07:24.903 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:07:24.903 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:07:24.903 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:07:24.903 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:07:24.903 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:07:25.161 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:07:25.161 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:07:25.161 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:07:25.418 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:07:25.675 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:07:25.675 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:07:25.675 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:07:25.953 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:07:25.953 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:07:25.953 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:07:25.953 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:07:25.953 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:07:26.210 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:07:26.211 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:07:26.211 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:07:26.211 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:07:26.468 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:07:26.468 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:07:26.468 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:07:26.468 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:07:26.468 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:07:26.727 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:07:26.727 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:07:26.985 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:07:26.985 [85/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:07:27.244 [86/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:07:27.244 [87/268] Linking static target lib/librte_eal.a 00:07:27.244 [88/268] Linking static target lib/librte_rcu.a 00:07:27.244 [89/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:07:27.244 [90/268] Linking static target lib/librte_ring.a 00:07:27.244 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:07:27.244 [92/268] 
Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:07:27.244 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:07:27.502 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:07:27.761 [95/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:07:27.761 [96/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:07:27.761 [97/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:07:27.761 [98/268] Linking static target lib/librte_mempool.a 00:07:27.761 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:07:28.019 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:07:28.277 [101/268] Linking static target lib/librte_mbuf.a 00:07:28.277 [102/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:07:28.277 [103/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:07:28.277 [104/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:07:28.277 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:07:28.277 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:07:28.277 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:07:28.844 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:07:28.844 [109/268] Linking static target lib/librte_net.a 00:07:28.844 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:07:28.844 [111/268] Linking static target lib/librte_meter.a 00:07:28.844 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:07:28.844 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:07:29.103 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:07:29.103 [115/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:07:29.361 [116/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:07:29.361 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:07:29.361 [118/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:07:29.361 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:07:29.621 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:07:30.187 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:07:30.187 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:07:30.187 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:07:30.187 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:07:30.480 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:07:30.480 [126/268] Linking static target lib/librte_pci.a 00:07:30.480 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:07:30.738 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:07:30.738 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:07:30.996 [130/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:07:30.996 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:07:30.996 [132/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:07:30.996 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:07:30.996 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:07:30.996 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:07:30.996 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:07:31.255 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:07:31.255 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:07:31.255 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:07:31.255 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:07:31.255 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:07:31.255 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:07:31.513 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:07:31.513 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:07:31.513 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:07:31.771 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:07:31.771 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:07:31.771 [148/268] Linking static target lib/librte_cmdline.a 00:07:32.029 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:07:32.286 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:07:32.286 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:07:32.286 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:07:32.286 [153/268] Linking static target lib/librte_timer.a 00:07:32.544 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:07:32.801 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:07:33.058 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:07:33.058 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:07:33.315 [158/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:07:33.315 [159/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:07:33.315 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:07:33.315 [161/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:07:33.315 [162/268] Linking static target lib/librte_compressdev.a 00:07:33.315 [163/268] Linking static target lib/librte_hash.a 00:07:33.880 [164/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:07:33.880 [165/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:07:33.880 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:07:34.138 [167/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:07:34.138 [168/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:07:34.138 [169/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:07:34.396 [170/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:34.396 [171/268] Compiling C object 
lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:07:34.396 [172/268] Linking static target lib/librte_dmadev.a 00:07:34.960 [173/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:07:34.960 [174/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:07:34.960 [175/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:07:34.960 [176/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:07:34.960 [177/268] Linking static target lib/librte_cryptodev.a 00:07:34.960 [178/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:07:35.218 [179/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:07:35.218 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:07:35.783 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:35.783 [182/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:07:35.783 [183/268] Linking static target lib/librte_reorder.a 00:07:35.783 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:07:35.783 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:07:35.783 [186/268] Linking static target lib/librte_power.a 00:07:36.041 [187/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:07:36.041 [188/268] Linking static target lib/librte_security.a 00:07:36.299 [189/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:07:36.299 [190/268] Linking static target lib/librte_ethdev.a 00:07:36.558 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:07:36.558 [192/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:07:36.558 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:07:36.818 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:07:37.076 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:07:37.334 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:07:37.334 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:07:37.593 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:07:37.593 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:07:38.270 [200/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:38.270 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:07:38.270 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:07:38.270 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:07:38.270 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:07:38.530 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:07:38.530 [206/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:07:38.789 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:07:39.047 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:07:39.047 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:07:39.047 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:07:39.047 [211/268] Linking 
static target drivers/libtmp_rte_bus_pci.a 00:07:39.305 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:07:39.305 [213/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:39.305 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:39.305 [215/268] Linking static target drivers/librte_bus_vdev.a 00:07:39.305 [216/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:07:39.305 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:39.305 [218/268] Linking static target drivers/librte_bus_pci.a 00:07:39.305 [219/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:39.564 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:07:39.564 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:07:39.832 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:39.832 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:07:39.832 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:39.832 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:39.832 [226/268] Linking static target drivers/librte_mempool_ring.a 00:07:40.096 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:07:40.663 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:07:40.921 [229/268] Linking target lib/librte_eal.so.24.1 00:07:40.921 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:07:40.921 [231/268] Linking target lib/librte_ring.so.24.1 00:07:40.921 [232/268] Linking target lib/librte_meter.so.24.1 00:07:41.179 [233/268] Linking target lib/librte_timer.so.24.1 00:07:41.179 [234/268] Linking target drivers/librte_bus_vdev.so.24.1 00:07:41.179 [235/268] Linking target lib/librte_pci.so.24.1 00:07:41.179 [236/268] Linking target lib/librte_dmadev.so.24.1 00:07:41.179 [237/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:07:41.179 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:07:41.179 [239/268] Linking target lib/librte_mempool.so.24.1 00:07:41.180 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:07:41.180 [241/268] Linking target lib/librte_rcu.so.24.1 00:07:41.180 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:07:41.437 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:07:41.437 [244/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:07:41.437 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:07:41.437 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:07:41.437 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:07:41.437 [248/268] Linking target lib/librte_mbuf.so.24.1 00:07:41.437 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:07:41.695 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:07:41.695 [251/268] Linking target 
lib/librte_cryptodev.so.24.1 00:07:41.695 [252/268] Linking target lib/librte_compressdev.so.24.1 00:07:41.695 [253/268] Linking target lib/librte_reorder.so.24.1 00:07:41.695 [254/268] Linking target lib/librte_net.so.24.1 00:07:41.953 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:07:41.953 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:07:41.953 [257/268] Linking target lib/librte_security.so.24.1 00:07:41.953 [258/268] Linking target lib/librte_cmdline.so.24.1 00:07:41.953 [259/268] Linking target lib/librte_hash.so.24.1 00:07:42.212 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:07:44.747 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:44.747 [262/268] Linking target lib/librte_ethdev.so.24.1 00:07:44.747 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:07:44.747 [264/268] Linking target lib/librte_power.so.24.1 00:07:46.648 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:07:46.648 [266/268] Linking static target lib/librte_vhost.a 00:07:48.022 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:07:48.280 [268/268] Linking target lib/librte_vhost.so.24.1 00:07:48.280 INFO: autodetecting backend as ninja 00:07:48.280 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:08:14.828 CC lib/ut/ut.o 00:08:14.828 CC lib/ut_mock/mock.o 00:08:14.828 CC lib/log/log.o 00:08:14.828 CC lib/log/log_deprecated.o 00:08:14.828 CC lib/log/log_flags.o 00:08:14.828 LIB libspdk_ut.a 00:08:14.828 SO libspdk_ut.so.2.0 00:08:14.828 LIB libspdk_ut_mock.a 00:08:14.828 SO libspdk_ut_mock.so.6.0 00:08:14.828 SYMLINK libspdk_ut.so 00:08:14.828 LIB libspdk_log.a 00:08:14.828 SO libspdk_log.so.7.1 00:08:14.828 SYMLINK libspdk_ut_mock.so 00:08:14.828 SYMLINK libspdk_log.so 00:08:14.828 CC lib/util/bit_array.o 00:08:14.828 CC lib/util/base64.o 00:08:14.828 CC lib/util/cpuset.o 00:08:14.828 CC lib/util/crc32c.o 00:08:14.828 CC lib/util/crc16.o 00:08:14.828 CC lib/util/crc32.o 00:08:14.828 CXX lib/trace_parser/trace.o 00:08:14.828 CC lib/ioat/ioat.o 00:08:14.828 CC lib/dma/dma.o 00:08:14.828 CC lib/vfio_user/host/vfio_user_pci.o 00:08:14.828 CC lib/vfio_user/host/vfio_user.o 00:08:14.828 CC lib/util/crc32_ieee.o 00:08:14.828 CC lib/util/crc64.o 00:08:14.828 CC lib/util/dif.o 00:08:14.828 CC lib/util/fd.o 00:08:14.828 LIB libspdk_dma.a 00:08:14.828 SO libspdk_dma.so.5.0 00:08:14.828 CC lib/util/fd_group.o 00:08:14.828 CC lib/util/file.o 00:08:14.828 SYMLINK libspdk_dma.so 00:08:14.828 CC lib/util/hexlify.o 00:08:14.828 CC lib/util/iov.o 00:08:14.828 LIB libspdk_ioat.a 00:08:14.828 CC lib/util/math.o 00:08:14.828 CC lib/util/net.o 00:08:14.828 SO libspdk_ioat.so.7.0 00:08:14.828 LIB libspdk_vfio_user.a 00:08:14.828 SO libspdk_vfio_user.so.5.0 00:08:14.828 SYMLINK libspdk_ioat.so 00:08:14.828 CC lib/util/pipe.o 00:08:14.828 CC lib/util/strerror_tls.o 00:08:14.828 CC lib/util/string.o 00:08:14.828 SYMLINK libspdk_vfio_user.so 00:08:14.828 CC lib/util/uuid.o 00:08:14.828 CC lib/util/xor.o 00:08:14.828 CC lib/util/zipf.o 00:08:14.828 CC lib/util/md5.o 00:08:14.828 LIB libspdk_util.a 00:08:14.828 SO libspdk_util.so.10.1 00:08:14.828 LIB libspdk_trace_parser.a 00:08:14.828 SO libspdk_trace_parser.so.6.0 00:08:14.828 SYMLINK libspdk_trace_parser.so 
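The DPDK configure-and-build pass above ends with the "User defined options" summary (buildtype debug, shared default_library, ASan via b_sanitize : address) followed by the 268-step ninja run. A minimal sketch of the equivalent manual meson invocation, reconstructed from that options summary and the paths printed in the log — the actual command line is generated by SPDK's configure wrapper and may differ in detail:

    # Sketch only: reconstructed from the "User defined options" summary above,
    # not the literal command the wrapper ran.
    cd /home/vagrant/spdk_repo/spdk/dpdk
    meson setup build-tmp \
      -Dbuildtype=debug \
      -Ddefault_library=shared \
      -Dlibdir=lib \
      -Dprefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
      -Db_sanitize=address \
      -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
      -Dcpu_instruction_set=native \
      -Dmax_lcores=128 \
      -Dtests=false \
      -Denable_docs=false \
      -Denable_kmods=false \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm \
      -Ddisable_apps=dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test \
      -Ddisable_libs=acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
    # Build step as recorded in the log:
    ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10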
00:08:14.828 SYMLINK libspdk_util.so 00:08:14.828 CC lib/rdma_utils/rdma_utils.o 00:08:14.828 CC lib/conf/conf.o 00:08:14.828 CC lib/env_dpdk/env.o 00:08:14.828 CC lib/env_dpdk/memory.o 00:08:14.828 CC lib/env_dpdk/pci.o 00:08:14.828 CC lib/env_dpdk/init.o 00:08:14.828 CC lib/env_dpdk/threads.o 00:08:14.828 CC lib/json/json_parse.o 00:08:14.828 CC lib/idxd/idxd.o 00:08:14.828 CC lib/vmd/vmd.o 00:08:14.828 CC lib/env_dpdk/pci_ioat.o 00:08:14.828 CC lib/json/json_util.o 00:08:14.828 CC lib/json/json_write.o 00:08:14.828 LIB libspdk_conf.a 00:08:14.828 LIB libspdk_rdma_utils.a 00:08:14.828 SO libspdk_conf.so.6.0 00:08:14.828 SO libspdk_rdma_utils.so.1.0 00:08:14.828 SYMLINK libspdk_conf.so 00:08:14.828 CC lib/env_dpdk/pci_virtio.o 00:08:14.828 SYMLINK libspdk_rdma_utils.so 00:08:14.828 CC lib/env_dpdk/pci_vmd.o 00:08:14.828 CC lib/env_dpdk/pci_idxd.o 00:08:14.828 CC lib/env_dpdk/pci_event.o 00:08:14.828 CC lib/env_dpdk/sigbus_handler.o 00:08:14.828 CC lib/env_dpdk/pci_dpdk.o 00:08:14.828 CC lib/rdma_provider/common.o 00:08:14.828 LIB libspdk_json.a 00:08:14.828 CC lib/env_dpdk/pci_dpdk_2207.o 00:08:14.828 SO libspdk_json.so.6.0 00:08:14.828 CC lib/env_dpdk/pci_dpdk_2211.o 00:08:14.828 CC lib/idxd/idxd_user.o 00:08:14.828 SYMLINK libspdk_json.so 00:08:14.828 CC lib/vmd/led.o 00:08:14.828 CC lib/idxd/idxd_kernel.o 00:08:14.828 CC lib/rdma_provider/rdma_provider_verbs.o 00:08:14.828 LIB libspdk_vmd.a 00:08:14.828 SO libspdk_vmd.so.6.0 00:08:14.828 CC lib/jsonrpc/jsonrpc_server.o 00:08:14.828 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:08:14.828 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:08:14.828 CC lib/jsonrpc/jsonrpc_client.o 00:08:14.828 LIB libspdk_idxd.a 00:08:14.828 SYMLINK libspdk_vmd.so 00:08:14.828 SO libspdk_idxd.so.12.1 00:08:14.828 LIB libspdk_rdma_provider.a 00:08:14.828 SYMLINK libspdk_idxd.so 00:08:14.828 SO libspdk_rdma_provider.so.7.0 00:08:14.828 SYMLINK libspdk_rdma_provider.so 00:08:14.828 LIB libspdk_jsonrpc.a 00:08:14.828 SO libspdk_jsonrpc.so.6.0 00:08:14.828 SYMLINK libspdk_jsonrpc.so 00:08:15.086 CC lib/rpc/rpc.o 00:08:15.374 LIB libspdk_env_dpdk.a 00:08:15.374 LIB libspdk_rpc.a 00:08:15.374 SO libspdk_rpc.so.6.0 00:08:15.374 SO libspdk_env_dpdk.so.15.1 00:08:15.374 SYMLINK libspdk_rpc.so 00:08:15.632 SYMLINK libspdk_env_dpdk.so 00:08:15.632 CC lib/trace/trace_flags.o 00:08:15.632 CC lib/trace/trace.o 00:08:15.632 CC lib/trace/trace_rpc.o 00:08:15.632 CC lib/notify/notify.o 00:08:15.632 CC lib/notify/notify_rpc.o 00:08:15.632 CC lib/keyring/keyring.o 00:08:15.632 CC lib/keyring/keyring_rpc.o 00:08:15.890 LIB libspdk_notify.a 00:08:15.890 SO libspdk_notify.so.6.0 00:08:15.890 LIB libspdk_keyring.a 00:08:15.890 SO libspdk_keyring.so.2.0 00:08:16.148 LIB libspdk_trace.a 00:08:16.148 SYMLINK libspdk_notify.so 00:08:16.148 SO libspdk_trace.so.11.0 00:08:16.148 SYMLINK libspdk_keyring.so 00:08:16.148 SYMLINK libspdk_trace.so 00:08:16.409 CC lib/thread/thread.o 00:08:16.409 CC lib/thread/iobuf.o 00:08:16.409 CC lib/sock/sock_rpc.o 00:08:16.409 CC lib/sock/sock.o 00:08:16.976 LIB libspdk_sock.a 00:08:16.976 SO libspdk_sock.so.10.0 00:08:17.234 SYMLINK libspdk_sock.so 00:08:17.493 CC lib/nvme/nvme_ctrlr_cmd.o 00:08:17.493 CC lib/nvme/nvme_ctrlr.o 00:08:17.493 CC lib/nvme/nvme_fabric.o 00:08:17.493 CC lib/nvme/nvme_ns_cmd.o 00:08:17.493 CC lib/nvme/nvme_ns.o 00:08:17.493 CC lib/nvme/nvme_pcie.o 00:08:17.493 CC lib/nvme/nvme_pcie_common.o 00:08:17.493 CC lib/nvme/nvme_qpair.o 00:08:17.493 CC lib/nvme/nvme.o 00:08:18.428 CC lib/nvme/nvme_quirks.o 00:08:18.428 CC lib/nvme/nvme_transport.o 
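From this point on, the repeated CC / LIB / SO / SYMLINK lines are SPDK's quiet make output: each object file is compiled, archived into a static library, linked into a versioned shared object, and symlinked to its unversioned name. A rough illustration of what one such quadruple corresponds to, using the libspdk_log names from the log — the real recipes live in SPDK's mk/ build fragments with the project's full flag set, so this is a sketch rather than the literal commands:

    # Illustrative expansion of one CC/LIB/SO/SYMLINK quadruple; flags are
    # placeholders, version numbers taken from the SO lines in the log.
    cc $CFLAGS -c lib/log/log.c -o lib/log/log.o        # CC lib/log/log.o
    ar rcs libspdk_log.a lib/log/*.o                    # LIB libspdk_log.a
    cc -shared -Wl,-soname,libspdk_log.so.7.1 \
       -o libspdk_log.so.7.1 lib/log/*.o                # SO libspdk_log.so.7.1
    ln -sf libspdk_log.so.7.1 libspdk_log.so            # SYMLINK libspdk_log.so

In the actual build the versioned shared object is typically produced from the just-built static archive (whole-archive linking) rather than from loose objects as shown here.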
00:08:18.428 CC lib/nvme/nvme_discovery.o 00:08:18.428 LIB libspdk_thread.a 00:08:18.428 SO libspdk_thread.so.11.0 00:08:18.686 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:08:18.686 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:08:18.686 SYMLINK libspdk_thread.so 00:08:18.686 CC lib/nvme/nvme_tcp.o 00:08:18.686 CC lib/nvme/nvme_opal.o 00:08:18.686 CC lib/accel/accel.o 00:08:18.945 CC lib/blob/blobstore.o 00:08:19.203 CC lib/blob/request.o 00:08:19.203 CC lib/init/json_config.o 00:08:19.203 CC lib/init/subsystem.o 00:08:19.461 CC lib/virtio/virtio.o 00:08:19.461 CC lib/blob/zeroes.o 00:08:19.461 CC lib/fsdev/fsdev.o 00:08:19.461 CC lib/fsdev/fsdev_io.o 00:08:19.461 CC lib/init/subsystem_rpc.o 00:08:19.461 CC lib/init/rpc.o 00:08:19.719 CC lib/nvme/nvme_io_msg.o 00:08:19.719 CC lib/blob/blob_bs_dev.o 00:08:19.719 CC lib/virtio/virtio_vhost_user.o 00:08:19.719 LIB libspdk_init.a 00:08:19.719 SO libspdk_init.so.6.0 00:08:19.978 SYMLINK libspdk_init.so 00:08:19.978 CC lib/accel/accel_rpc.o 00:08:19.978 CC lib/accel/accel_sw.o 00:08:19.978 CC lib/fsdev/fsdev_rpc.o 00:08:20.237 CC lib/virtio/virtio_vfio_user.o 00:08:20.237 CC lib/virtio/virtio_pci.o 00:08:20.237 CC lib/nvme/nvme_poll_group.o 00:08:20.237 CC lib/nvme/nvme_zns.o 00:08:20.237 CC lib/event/app.o 00:08:20.237 LIB libspdk_fsdev.a 00:08:20.237 CC lib/nvme/nvme_stubs.o 00:08:20.237 LIB libspdk_accel.a 00:08:20.513 SO libspdk_fsdev.so.2.0 00:08:20.513 SO libspdk_accel.so.16.0 00:08:20.513 CC lib/nvme/nvme_auth.o 00:08:20.513 SYMLINK libspdk_fsdev.so 00:08:20.513 CC lib/event/reactor.o 00:08:20.513 SYMLINK libspdk_accel.so 00:08:20.513 CC lib/event/log_rpc.o 00:08:20.513 LIB libspdk_virtio.a 00:08:20.513 CC lib/event/app_rpc.o 00:08:20.513 SO libspdk_virtio.so.7.0 00:08:20.771 SYMLINK libspdk_virtio.so 00:08:20.771 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:08:20.771 CC lib/bdev/bdev.o 00:08:20.771 CC lib/event/scheduler_static.o 00:08:20.771 CC lib/nvme/nvme_cuse.o 00:08:21.030 CC lib/bdev/bdev_rpc.o 00:08:21.030 CC lib/bdev/bdev_zone.o 00:08:21.030 CC lib/nvme/nvme_rdma.o 00:08:21.030 CC lib/bdev/part.o 00:08:21.030 LIB libspdk_event.a 00:08:21.030 SO libspdk_event.so.14.0 00:08:21.288 SYMLINK libspdk_event.so 00:08:21.288 CC lib/bdev/scsi_nvme.o 00:08:21.546 LIB libspdk_fuse_dispatcher.a 00:08:21.805 SO libspdk_fuse_dispatcher.so.1.0 00:08:21.805 SYMLINK libspdk_fuse_dispatcher.so 00:08:23.182 LIB libspdk_nvme.a 00:08:23.182 SO libspdk_nvme.so.15.0 00:08:23.441 LIB libspdk_blob.a 00:08:23.441 SO libspdk_blob.so.12.0 00:08:23.699 SYMLINK libspdk_blob.so 00:08:23.699 SYMLINK libspdk_nvme.so 00:08:23.958 CC lib/lvol/lvol.o 00:08:23.958 CC lib/blobfs/blobfs.o 00:08:23.958 CC lib/blobfs/tree.o 00:08:24.900 LIB libspdk_bdev.a 00:08:24.900 SO libspdk_bdev.so.17.0 00:08:25.159 SYMLINK libspdk_bdev.so 00:08:25.159 LIB libspdk_blobfs.a 00:08:25.159 SO libspdk_blobfs.so.11.0 00:08:25.159 LIB libspdk_lvol.a 00:08:25.159 CC lib/ublk/ublk.o 00:08:25.159 CC lib/nvmf/ctrlr.o 00:08:25.159 CC lib/ublk/ublk_rpc.o 00:08:25.159 CC lib/nvmf/ctrlr_discovery.o 00:08:25.159 CC lib/nvmf/ctrlr_bdev.o 00:08:25.159 SO libspdk_lvol.so.11.0 00:08:25.159 CC lib/nbd/nbd.o 00:08:25.159 CC lib/ftl/ftl_core.o 00:08:25.159 CC lib/scsi/dev.o 00:08:25.159 SYMLINK libspdk_blobfs.so 00:08:25.159 CC lib/nvmf/subsystem.o 00:08:25.417 SYMLINK libspdk_lvol.so 00:08:25.417 CC lib/nvmf/nvmf.o 00:08:25.417 CC lib/nvmf/nvmf_rpc.o 00:08:25.417 CC lib/scsi/lun.o 00:08:25.676 CC lib/ftl/ftl_init.o 00:08:25.935 CC lib/nbd/nbd_rpc.o 00:08:25.935 CC lib/ftl/ftl_layout.o 00:08:25.935 CC lib/scsi/port.o 
00:08:25.935 LIB libspdk_nbd.a 00:08:25.935 CC lib/ftl/ftl_debug.o 00:08:25.935 SO libspdk_nbd.so.7.0 00:08:26.193 LIB libspdk_ublk.a 00:08:26.193 CC lib/scsi/scsi.o 00:08:26.193 SYMLINK libspdk_nbd.so 00:08:26.193 CC lib/scsi/scsi_bdev.o 00:08:26.193 CC lib/scsi/scsi_pr.o 00:08:26.193 SO libspdk_ublk.so.3.0 00:08:26.193 SYMLINK libspdk_ublk.so 00:08:26.193 CC lib/ftl/ftl_io.o 00:08:26.193 CC lib/nvmf/transport.o 00:08:26.193 CC lib/nvmf/tcp.o 00:08:26.451 CC lib/nvmf/stubs.o 00:08:26.451 CC lib/nvmf/mdns_server.o 00:08:26.451 CC lib/ftl/ftl_sb.o 00:08:26.451 CC lib/nvmf/rdma.o 00:08:26.451 CC lib/scsi/scsi_rpc.o 00:08:26.709 CC lib/ftl/ftl_l2p.o 00:08:26.709 CC lib/scsi/task.o 00:08:26.709 CC lib/ftl/ftl_l2p_flat.o 00:08:26.967 CC lib/nvmf/auth.o 00:08:26.967 CC lib/ftl/ftl_nv_cache.o 00:08:26.967 CC lib/ftl/ftl_band.o 00:08:26.967 LIB libspdk_scsi.a 00:08:26.967 CC lib/ftl/ftl_band_ops.o 00:08:26.967 CC lib/ftl/ftl_writer.o 00:08:27.225 SO libspdk_scsi.so.9.0 00:08:27.225 CC lib/ftl/ftl_rq.o 00:08:27.225 SYMLINK libspdk_scsi.so 00:08:27.225 CC lib/ftl/ftl_reloc.o 00:08:27.483 CC lib/ftl/ftl_l2p_cache.o 00:08:27.483 CC lib/ftl/ftl_p2l.o 00:08:27.742 CC lib/iscsi/conn.o 00:08:27.742 CC lib/ftl/ftl_p2l_log.o 00:08:27.742 CC lib/vhost/vhost.o 00:08:28.000 CC lib/iscsi/init_grp.o 00:08:28.000 CC lib/iscsi/iscsi.o 00:08:28.000 CC lib/iscsi/param.o 00:08:28.000 CC lib/vhost/vhost_rpc.o 00:08:28.000 CC lib/vhost/vhost_scsi.o 00:08:28.258 CC lib/iscsi/portal_grp.o 00:08:28.258 CC lib/ftl/mngt/ftl_mngt.o 00:08:28.258 CC lib/vhost/vhost_blk.o 00:08:28.517 CC lib/iscsi/tgt_node.o 00:08:28.517 CC lib/iscsi/iscsi_subsystem.o 00:08:28.517 CC lib/iscsi/iscsi_rpc.o 00:08:28.775 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:08:28.775 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:08:29.033 CC lib/iscsi/task.o 00:08:29.033 CC lib/vhost/rte_vhost_user.o 00:08:29.033 CC lib/ftl/mngt/ftl_mngt_startup.o 00:08:29.033 CC lib/ftl/mngt/ftl_mngt_md.o 00:08:29.033 CC lib/ftl/mngt/ftl_mngt_misc.o 00:08:29.033 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:08:29.291 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:08:29.291 CC lib/ftl/mngt/ftl_mngt_band.o 00:08:29.291 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:08:29.291 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:08:29.291 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:08:29.291 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:08:29.291 CC lib/ftl/utils/ftl_conf.o 00:08:29.549 CC lib/ftl/utils/ftl_md.o 00:08:29.549 CC lib/ftl/utils/ftl_mempool.o 00:08:29.549 LIB libspdk_nvmf.a 00:08:29.549 CC lib/ftl/utils/ftl_bitmap.o 00:08:29.549 CC lib/ftl/utils/ftl_property.o 00:08:29.549 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:08:29.549 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:08:29.825 SO libspdk_nvmf.so.20.0 00:08:29.825 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:08:29.825 LIB libspdk_iscsi.a 00:08:29.825 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:08:29.825 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:08:29.825 SO libspdk_iscsi.so.8.0 00:08:29.825 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:08:30.083 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:08:30.083 CC lib/ftl/upgrade/ftl_sb_v3.o 00:08:30.083 CC lib/ftl/upgrade/ftl_sb_v5.o 00:08:30.083 CC lib/ftl/nvc/ftl_nvc_dev.o 00:08:30.083 SYMLINK libspdk_nvmf.so 00:08:30.083 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:08:30.083 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:08:30.083 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:08:30.083 SYMLINK libspdk_iscsi.so 00:08:30.083 CC lib/ftl/base/ftl_base_dev.o 00:08:30.083 CC lib/ftl/base/ftl_base_bdev.o 00:08:30.083 CC lib/ftl/ftl_trace.o 00:08:30.342 LIB libspdk_vhost.a 00:08:30.342 SO 
libspdk_vhost.so.8.0 00:08:30.342 LIB libspdk_ftl.a 00:08:30.342 SYMLINK libspdk_vhost.so 00:08:30.599 SO libspdk_ftl.so.9.0 00:08:31.165 SYMLINK libspdk_ftl.so 00:08:31.423 CC module/env_dpdk/env_dpdk_rpc.o 00:08:31.423 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:08:31.423 CC module/scheduler/dynamic/scheduler_dynamic.o 00:08:31.423 CC module/blob/bdev/blob_bdev.o 00:08:31.423 CC module/accel/ioat/accel_ioat.o 00:08:31.423 CC module/fsdev/aio/fsdev_aio.o 00:08:31.423 CC module/keyring/file/keyring.o 00:08:31.423 CC module/accel/error/accel_error.o 00:08:31.423 CC module/sock/posix/posix.o 00:08:31.680 CC module/scheduler/gscheduler/gscheduler.o 00:08:31.680 LIB libspdk_env_dpdk_rpc.a 00:08:31.680 SO libspdk_env_dpdk_rpc.so.6.0 00:08:31.680 SYMLINK libspdk_env_dpdk_rpc.so 00:08:31.680 CC module/accel/error/accel_error_rpc.o 00:08:31.680 LIB libspdk_scheduler_dpdk_governor.a 00:08:31.680 CC module/keyring/file/keyring_rpc.o 00:08:31.680 LIB libspdk_scheduler_gscheduler.a 00:08:31.680 SO libspdk_scheduler_dpdk_governor.so.4.0 00:08:31.680 SO libspdk_scheduler_gscheduler.so.4.0 00:08:31.680 LIB libspdk_scheduler_dynamic.a 00:08:31.680 CC module/accel/ioat/accel_ioat_rpc.o 00:08:31.680 SO libspdk_scheduler_dynamic.so.4.0 00:08:31.939 SYMLINK libspdk_scheduler_dpdk_governor.so 00:08:31.939 SYMLINK libspdk_scheduler_gscheduler.so 00:08:31.939 CC module/fsdev/aio/fsdev_aio_rpc.o 00:08:31.939 LIB libspdk_accel_error.a 00:08:31.939 SYMLINK libspdk_scheduler_dynamic.so 00:08:31.939 SO libspdk_accel_error.so.2.0 00:08:31.939 LIB libspdk_keyring_file.a 00:08:31.939 LIB libspdk_blob_bdev.a 00:08:31.939 SO libspdk_blob_bdev.so.12.0 00:08:31.939 SO libspdk_keyring_file.so.2.0 00:08:31.939 SYMLINK libspdk_accel_error.so 00:08:31.939 LIB libspdk_accel_ioat.a 00:08:31.939 SYMLINK libspdk_blob_bdev.so 00:08:31.939 SYMLINK libspdk_keyring_file.so 00:08:31.939 CC module/fsdev/aio/linux_aio_mgr.o 00:08:31.939 CC module/accel/dsa/accel_dsa.o 00:08:31.939 CC module/accel/iaa/accel_iaa.o 00:08:31.939 CC module/accel/dsa/accel_dsa_rpc.o 00:08:31.939 SO libspdk_accel_ioat.so.6.0 00:08:32.197 CC module/keyring/linux/keyring.o 00:08:32.197 SYMLINK libspdk_accel_ioat.so 00:08:32.197 CC module/accel/iaa/accel_iaa_rpc.o 00:08:32.197 CC module/keyring/linux/keyring_rpc.o 00:08:32.197 CC module/bdev/delay/vbdev_delay.o 00:08:32.197 CC module/blobfs/bdev/blobfs_bdev.o 00:08:32.197 CC module/bdev/error/vbdev_error.o 00:08:32.455 CC module/bdev/gpt/gpt.o 00:08:32.455 LIB libspdk_accel_dsa.a 00:08:32.455 LIB libspdk_accel_iaa.a 00:08:32.455 LIB libspdk_keyring_linux.a 00:08:32.455 SO libspdk_accel_dsa.so.5.0 00:08:32.455 SO libspdk_accel_iaa.so.3.0 00:08:32.455 CC module/bdev/lvol/vbdev_lvol.o 00:08:32.455 SO libspdk_keyring_linux.so.1.0 00:08:32.455 LIB libspdk_fsdev_aio.a 00:08:32.455 SYMLINK libspdk_accel_iaa.so 00:08:32.455 SYMLINK libspdk_accel_dsa.so 00:08:32.455 CC module/bdev/gpt/vbdev_gpt.o 00:08:32.455 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:08:32.455 LIB libspdk_sock_posix.a 00:08:32.455 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:08:32.455 SYMLINK libspdk_keyring_linux.so 00:08:32.455 SO libspdk_fsdev_aio.so.1.0 00:08:32.455 CC module/bdev/error/vbdev_error_rpc.o 00:08:32.455 SO libspdk_sock_posix.so.6.0 00:08:32.714 CC module/bdev/delay/vbdev_delay_rpc.o 00:08:32.714 SYMLINK libspdk_fsdev_aio.so 00:08:32.714 SYMLINK libspdk_sock_posix.so 00:08:32.714 LIB libspdk_blobfs_bdev.a 00:08:32.714 LIB libspdk_bdev_error.a 00:08:32.714 SO libspdk_blobfs_bdev.so.6.0 00:08:32.714 SO libspdk_bdev_error.so.6.0 
00:08:32.714 LIB libspdk_bdev_delay.a 00:08:32.714 CC module/bdev/nvme/bdev_nvme.o 00:08:32.714 CC module/bdev/null/bdev_null.o 00:08:32.714 CC module/bdev/malloc/bdev_malloc.o 00:08:32.714 LIB libspdk_bdev_gpt.a 00:08:32.714 SYMLINK libspdk_blobfs_bdev.so 00:08:32.714 SO libspdk_bdev_delay.so.6.0 00:08:32.714 SYMLINK libspdk_bdev_error.so 00:08:32.972 SO libspdk_bdev_gpt.so.6.0 00:08:32.972 SYMLINK libspdk_bdev_delay.so 00:08:32.972 SYMLINK libspdk_bdev_gpt.so 00:08:32.972 CC module/bdev/malloc/bdev_malloc_rpc.o 00:08:32.972 CC module/bdev/passthru/vbdev_passthru.o 00:08:32.972 CC module/bdev/raid/bdev_raid.o 00:08:32.972 CC module/bdev/split/vbdev_split.o 00:08:32.972 LIB libspdk_bdev_lvol.a 00:08:32.972 CC module/bdev/zone_block/vbdev_zone_block.o 00:08:33.231 SO libspdk_bdev_lvol.so.6.0 00:08:33.231 CC module/bdev/null/bdev_null_rpc.o 00:08:33.231 CC module/bdev/xnvme/bdev_xnvme.o 00:08:33.231 CC module/bdev/split/vbdev_split_rpc.o 00:08:33.231 SYMLINK libspdk_bdev_lvol.so 00:08:33.231 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:08:33.231 LIB libspdk_bdev_malloc.a 00:08:33.231 CC module/bdev/raid/bdev_raid_rpc.o 00:08:33.231 SO libspdk_bdev_malloc.so.6.0 00:08:33.231 CC module/bdev/raid/bdev_raid_sb.o 00:08:33.231 LIB libspdk_bdev_split.a 00:08:33.489 LIB libspdk_bdev_null.a 00:08:33.489 SYMLINK libspdk_bdev_malloc.so 00:08:33.489 SO libspdk_bdev_split.so.6.0 00:08:33.489 LIB libspdk_bdev_passthru.a 00:08:33.489 SO libspdk_bdev_null.so.6.0 00:08:33.489 SO libspdk_bdev_passthru.so.6.0 00:08:33.489 SYMLINK libspdk_bdev_split.so 00:08:33.489 SYMLINK libspdk_bdev_null.so 00:08:33.489 CC module/bdev/raid/raid0.o 00:08:33.489 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:08:33.489 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:08:33.489 SYMLINK libspdk_bdev_passthru.so 00:08:33.489 CC module/bdev/aio/bdev_aio.o 00:08:33.489 CC module/bdev/aio/bdev_aio_rpc.o 00:08:33.748 CC module/bdev/ftl/bdev_ftl.o 00:08:33.748 CC module/bdev/ftl/bdev_ftl_rpc.o 00:08:33.748 LIB libspdk_bdev_xnvme.a 00:08:33.748 LIB libspdk_bdev_zone_block.a 00:08:33.748 CC module/bdev/iscsi/bdev_iscsi.o 00:08:33.748 SO libspdk_bdev_xnvme.so.3.0 00:08:33.748 SO libspdk_bdev_zone_block.so.6.0 00:08:33.748 CC module/bdev/raid/raid1.o 00:08:33.748 SYMLINK libspdk_bdev_xnvme.so 00:08:33.748 SYMLINK libspdk_bdev_zone_block.so 00:08:33.748 CC module/bdev/raid/concat.o 00:08:33.748 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:08:34.006 CC module/bdev/nvme/bdev_nvme_rpc.o 00:08:34.006 CC module/bdev/nvme/nvme_rpc.o 00:08:34.006 LIB libspdk_bdev_aio.a 00:08:34.006 LIB libspdk_bdev_ftl.a 00:08:34.006 SO libspdk_bdev_aio.so.6.0 00:08:34.006 CC module/bdev/virtio/bdev_virtio_scsi.o 00:08:34.006 SO libspdk_bdev_ftl.so.6.0 00:08:34.264 SYMLINK libspdk_bdev_aio.so 00:08:34.264 SYMLINK libspdk_bdev_ftl.so 00:08:34.264 CC module/bdev/nvme/bdev_mdns_client.o 00:08:34.264 CC module/bdev/virtio/bdev_virtio_blk.o 00:08:34.264 CC module/bdev/virtio/bdev_virtio_rpc.o 00:08:34.264 LIB libspdk_bdev_iscsi.a 00:08:34.264 CC module/bdev/nvme/vbdev_opal.o 00:08:34.264 SO libspdk_bdev_iscsi.so.6.0 00:08:34.264 SYMLINK libspdk_bdev_iscsi.so 00:08:34.264 CC module/bdev/nvme/vbdev_opal_rpc.o 00:08:34.264 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:08:34.264 LIB libspdk_bdev_raid.a 00:08:34.521 SO libspdk_bdev_raid.so.6.0 00:08:34.521 SYMLINK libspdk_bdev_raid.so 00:08:34.779 LIB libspdk_bdev_virtio.a 00:08:34.779 SO libspdk_bdev_virtio.so.6.0 00:08:34.779 SYMLINK libspdk_bdev_virtio.so 00:08:36.682 LIB libspdk_bdev_nvme.a 00:08:36.682 SO 
libspdk_bdev_nvme.so.7.1 00:08:36.682 SYMLINK libspdk_bdev_nvme.so 00:08:37.249 CC module/event/subsystems/keyring/keyring.o 00:08:37.249 CC module/event/subsystems/iobuf/iobuf.o 00:08:37.249 CC module/event/subsystems/sock/sock.o 00:08:37.249 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:08:37.249 CC module/event/subsystems/vmd/vmd.o 00:08:37.249 CC module/event/subsystems/vmd/vmd_rpc.o 00:08:37.249 CC module/event/subsystems/scheduler/scheduler.o 00:08:37.249 CC module/event/subsystems/fsdev/fsdev.o 00:08:37.249 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:08:37.249 LIB libspdk_event_keyring.a 00:08:37.249 LIB libspdk_event_scheduler.a 00:08:37.249 LIB libspdk_event_sock.a 00:08:37.249 SO libspdk_event_keyring.so.1.0 00:08:37.249 LIB libspdk_event_vhost_blk.a 00:08:37.249 LIB libspdk_event_vmd.a 00:08:37.249 LIB libspdk_event_fsdev.a 00:08:37.249 LIB libspdk_event_iobuf.a 00:08:37.249 SO libspdk_event_scheduler.so.4.0 00:08:37.249 SO libspdk_event_sock.so.5.0 00:08:37.249 SO libspdk_event_vhost_blk.so.3.0 00:08:37.249 SO libspdk_event_fsdev.so.1.0 00:08:37.249 SO libspdk_event_vmd.so.6.0 00:08:37.249 SYMLINK libspdk_event_keyring.so 00:08:37.249 SO libspdk_event_iobuf.so.3.0 00:08:37.507 SYMLINK libspdk_event_scheduler.so 00:08:37.507 SYMLINK libspdk_event_sock.so 00:08:37.507 SYMLINK libspdk_event_fsdev.so 00:08:37.507 SYMLINK libspdk_event_vhost_blk.so 00:08:37.507 SYMLINK libspdk_event_vmd.so 00:08:37.507 SYMLINK libspdk_event_iobuf.so 00:08:37.765 CC module/event/subsystems/accel/accel.o 00:08:37.765 LIB libspdk_event_accel.a 00:08:38.023 SO libspdk_event_accel.so.6.0 00:08:38.023 SYMLINK libspdk_event_accel.so 00:08:38.281 CC module/event/subsystems/bdev/bdev.o 00:08:38.539 LIB libspdk_event_bdev.a 00:08:38.539 SO libspdk_event_bdev.so.6.0 00:08:38.539 SYMLINK libspdk_event_bdev.so 00:08:38.797 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:08:38.797 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:08:38.797 CC module/event/subsystems/ublk/ublk.o 00:08:38.797 CC module/event/subsystems/nbd/nbd.o 00:08:38.797 CC module/event/subsystems/scsi/scsi.o 00:08:39.055 LIB libspdk_event_ublk.a 00:08:39.055 LIB libspdk_event_nbd.a 00:08:39.055 LIB libspdk_event_scsi.a 00:08:39.055 SO libspdk_event_ublk.so.3.0 00:08:39.055 SO libspdk_event_nbd.so.6.0 00:08:39.055 SO libspdk_event_scsi.so.6.0 00:08:39.055 SYMLINK libspdk_event_ublk.so 00:08:39.055 SYMLINK libspdk_event_nbd.so 00:08:39.055 LIB libspdk_event_nvmf.a 00:08:39.314 SYMLINK libspdk_event_scsi.so 00:08:39.314 SO libspdk_event_nvmf.so.6.0 00:08:39.314 SYMLINK libspdk_event_nvmf.so 00:08:39.314 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:08:39.314 CC module/event/subsystems/iscsi/iscsi.o 00:08:39.571 LIB libspdk_event_vhost_scsi.a 00:08:39.571 LIB libspdk_event_iscsi.a 00:08:39.571 SO libspdk_event_vhost_scsi.so.3.0 00:08:39.571 SO libspdk_event_iscsi.so.6.0 00:08:39.830 SYMLINK libspdk_event_vhost_scsi.so 00:08:39.830 SYMLINK libspdk_event_iscsi.so 00:08:39.830 SO libspdk.so.6.0 00:08:39.830 SYMLINK libspdk.so 00:08:40.088 CXX app/trace/trace.o 00:08:40.088 CC app/trace_record/trace_record.o 00:08:40.088 CC app/spdk_lspci/spdk_lspci.o 00:08:40.088 CC examples/interrupt_tgt/interrupt_tgt.o 00:08:40.088 CC app/nvmf_tgt/nvmf_main.o 00:08:40.347 CC app/iscsi_tgt/iscsi_tgt.o 00:08:40.347 CC app/spdk_tgt/spdk_tgt.o 00:08:40.347 CC examples/ioat/perf/perf.o 00:08:40.347 CC examples/util/zipf/zipf.o 00:08:40.347 CC test/thread/poller_perf/poller_perf.o 00:08:40.347 LINK spdk_lspci 00:08:40.347 LINK nvmf_tgt 00:08:40.347 LINK 
interrupt_tgt 00:08:40.631 LINK poller_perf 00:08:40.631 LINK iscsi_tgt 00:08:40.631 LINK spdk_trace_record 00:08:40.631 LINK zipf 00:08:40.631 LINK spdk_tgt 00:08:40.631 LINK ioat_perf 00:08:40.631 CC app/spdk_nvme_perf/perf.o 00:08:40.631 LINK spdk_trace 00:08:40.889 CC app/spdk_nvme_discover/discovery_aer.o 00:08:40.889 CC app/spdk_nvme_identify/identify.o 00:08:40.889 CC examples/ioat/verify/verify.o 00:08:40.889 CC app/spdk_top/spdk_top.o 00:08:40.889 CC test/dma/test_dma/test_dma.o 00:08:40.889 CC app/spdk_dd/spdk_dd.o 00:08:40.889 CC test/app/bdev_svc/bdev_svc.o 00:08:41.147 CC examples/thread/thread/thread_ex.o 00:08:41.147 LINK spdk_nvme_discover 00:08:41.147 LINK verify 00:08:41.147 CC app/fio/nvme/fio_plugin.o 00:08:41.147 LINK bdev_svc 00:08:41.420 LINK thread 00:08:41.421 LINK spdk_dd 00:08:41.421 CC app/fio/bdev/fio_plugin.o 00:08:41.421 CC examples/sock/hello_world/hello_sock.o 00:08:41.698 LINK test_dma 00:08:41.698 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:08:41.698 CC test/app/histogram_perf/histogram_perf.o 00:08:41.698 CC app/vhost/vhost.o 00:08:41.698 LINK spdk_nvme_perf 00:08:41.698 LINK hello_sock 00:08:41.957 LINK spdk_nvme 00:08:41.957 TEST_HEADER include/spdk/accel.h 00:08:41.957 TEST_HEADER include/spdk/accel_module.h 00:08:41.957 TEST_HEADER include/spdk/assert.h 00:08:41.957 LINK histogram_perf 00:08:41.957 TEST_HEADER include/spdk/barrier.h 00:08:41.957 TEST_HEADER include/spdk/base64.h 00:08:41.957 TEST_HEADER include/spdk/bdev.h 00:08:41.957 TEST_HEADER include/spdk/bdev_module.h 00:08:41.957 TEST_HEADER include/spdk/bdev_zone.h 00:08:41.957 TEST_HEADER include/spdk/bit_array.h 00:08:41.957 TEST_HEADER include/spdk/bit_pool.h 00:08:41.957 TEST_HEADER include/spdk/blob_bdev.h 00:08:41.957 TEST_HEADER include/spdk/blobfs_bdev.h 00:08:41.957 TEST_HEADER include/spdk/blobfs.h 00:08:41.957 TEST_HEADER include/spdk/blob.h 00:08:41.957 TEST_HEADER include/spdk/conf.h 00:08:41.957 TEST_HEADER include/spdk/config.h 00:08:41.957 TEST_HEADER include/spdk/cpuset.h 00:08:41.957 TEST_HEADER include/spdk/crc16.h 00:08:41.957 TEST_HEADER include/spdk/crc32.h 00:08:41.957 TEST_HEADER include/spdk/crc64.h 00:08:41.957 TEST_HEADER include/spdk/dif.h 00:08:41.957 TEST_HEADER include/spdk/dma.h 00:08:41.957 TEST_HEADER include/spdk/endian.h 00:08:41.957 TEST_HEADER include/spdk/env_dpdk.h 00:08:41.957 TEST_HEADER include/spdk/env.h 00:08:41.957 TEST_HEADER include/spdk/event.h 00:08:41.957 TEST_HEADER include/spdk/fd_group.h 00:08:41.957 TEST_HEADER include/spdk/fd.h 00:08:41.957 TEST_HEADER include/spdk/file.h 00:08:41.957 TEST_HEADER include/spdk/fsdev.h 00:08:41.957 TEST_HEADER include/spdk/fsdev_module.h 00:08:41.957 TEST_HEADER include/spdk/ftl.h 00:08:41.957 TEST_HEADER include/spdk/fuse_dispatcher.h 00:08:41.957 TEST_HEADER include/spdk/gpt_spec.h 00:08:41.957 TEST_HEADER include/spdk/hexlify.h 00:08:41.957 TEST_HEADER include/spdk/histogram_data.h 00:08:41.957 TEST_HEADER include/spdk/idxd.h 00:08:41.957 TEST_HEADER include/spdk/idxd_spec.h 00:08:41.957 TEST_HEADER include/spdk/init.h 00:08:41.957 TEST_HEADER include/spdk/ioat.h 00:08:41.957 TEST_HEADER include/spdk/ioat_spec.h 00:08:41.957 TEST_HEADER include/spdk/iscsi_spec.h 00:08:41.957 TEST_HEADER include/spdk/json.h 00:08:41.957 TEST_HEADER include/spdk/jsonrpc.h 00:08:41.957 TEST_HEADER include/spdk/keyring.h 00:08:41.957 TEST_HEADER include/spdk/keyring_module.h 00:08:41.957 TEST_HEADER include/spdk/likely.h 00:08:41.957 TEST_HEADER include/spdk/log.h 00:08:41.957 LINK spdk_nvme_identify 00:08:41.957 TEST_HEADER 
include/spdk/lvol.h 00:08:41.957 TEST_HEADER include/spdk/md5.h 00:08:41.957 TEST_HEADER include/spdk/memory.h 00:08:41.957 TEST_HEADER include/spdk/mmio.h 00:08:41.957 TEST_HEADER include/spdk/nbd.h 00:08:41.957 TEST_HEADER include/spdk/net.h 00:08:41.957 TEST_HEADER include/spdk/notify.h 00:08:41.957 TEST_HEADER include/spdk/nvme.h 00:08:41.957 TEST_HEADER include/spdk/nvme_intel.h 00:08:41.957 TEST_HEADER include/spdk/nvme_ocssd.h 00:08:41.957 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:08:41.957 TEST_HEADER include/spdk/nvme_spec.h 00:08:41.957 TEST_HEADER include/spdk/nvme_zns.h 00:08:41.957 TEST_HEADER include/spdk/nvmf_cmd.h 00:08:41.957 LINK vhost 00:08:41.957 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:08:41.957 TEST_HEADER include/spdk/nvmf.h 00:08:41.957 LINK spdk_bdev 00:08:41.957 TEST_HEADER include/spdk/nvmf_spec.h 00:08:41.957 TEST_HEADER include/spdk/nvmf_transport.h 00:08:41.957 TEST_HEADER include/spdk/opal.h 00:08:41.957 TEST_HEADER include/spdk/opal_spec.h 00:08:41.957 TEST_HEADER include/spdk/pci_ids.h 00:08:41.957 LINK spdk_top 00:08:41.957 TEST_HEADER include/spdk/pipe.h 00:08:41.957 TEST_HEADER include/spdk/queue.h 00:08:41.957 TEST_HEADER include/spdk/reduce.h 00:08:41.957 TEST_HEADER include/spdk/rpc.h 00:08:41.957 TEST_HEADER include/spdk/scheduler.h 00:08:41.957 TEST_HEADER include/spdk/scsi.h 00:08:41.957 TEST_HEADER include/spdk/scsi_spec.h 00:08:41.957 TEST_HEADER include/spdk/sock.h 00:08:41.957 TEST_HEADER include/spdk/stdinc.h 00:08:41.957 TEST_HEADER include/spdk/string.h 00:08:41.957 TEST_HEADER include/spdk/thread.h 00:08:41.957 TEST_HEADER include/spdk/trace.h 00:08:41.957 TEST_HEADER include/spdk/trace_parser.h 00:08:41.957 TEST_HEADER include/spdk/tree.h 00:08:41.957 TEST_HEADER include/spdk/ublk.h 00:08:41.957 TEST_HEADER include/spdk/util.h 00:08:41.957 TEST_HEADER include/spdk/uuid.h 00:08:41.957 TEST_HEADER include/spdk/version.h 00:08:42.215 TEST_HEADER include/spdk/vfio_user_pci.h 00:08:42.215 TEST_HEADER include/spdk/vfio_user_spec.h 00:08:42.215 TEST_HEADER include/spdk/vhost.h 00:08:42.215 TEST_HEADER include/spdk/vmd.h 00:08:42.215 TEST_HEADER include/spdk/xor.h 00:08:42.215 TEST_HEADER include/spdk/zipf.h 00:08:42.215 CXX test/cpp_headers/accel.o 00:08:42.215 CC test/event/event_perf/event_perf.o 00:08:42.215 CXX test/cpp_headers/accel_module.o 00:08:42.215 CC examples/vmd/lsvmd/lsvmd.o 00:08:42.215 CXX test/cpp_headers/assert.o 00:08:42.215 LINK nvme_fuzz 00:08:42.215 CXX test/cpp_headers/barrier.o 00:08:42.215 CC examples/idxd/perf/perf.o 00:08:42.215 CXX test/cpp_headers/base64.o 00:08:42.215 CC test/env/mem_callbacks/mem_callbacks.o 00:08:42.215 LINK event_perf 00:08:42.215 LINK lsvmd 00:08:42.472 CC test/env/vtophys/vtophys.o 00:08:42.472 CXX test/cpp_headers/bdev.o 00:08:42.472 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:08:42.472 CC test/env/memory/memory_ut.o 00:08:42.472 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:08:42.472 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:08:42.472 CC test/event/reactor/reactor.o 00:08:42.472 CC examples/vmd/led/led.o 00:08:42.472 LINK vtophys 00:08:42.730 LINK idxd_perf 00:08:42.730 LINK env_dpdk_post_init 00:08:42.730 CXX test/cpp_headers/bdev_module.o 00:08:42.730 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:08:42.730 LINK reactor 00:08:42.730 LINK led 00:08:42.988 CC test/env/pci/pci_ut.o 00:08:42.988 LINK mem_callbacks 00:08:42.988 CC test/event/reactor_perf/reactor_perf.o 00:08:42.988 CXX test/cpp_headers/bdev_zone.o 00:08:42.988 CC test/event/app_repeat/app_repeat.o 00:08:42.988 
CC test/nvme/aer/aer.o 00:08:42.988 LINK reactor_perf 00:08:43.247 CC test/nvme/reset/reset.o 00:08:43.247 CXX test/cpp_headers/bit_array.o 00:08:43.247 CC examples/fsdev/hello_world/hello_fsdev.o 00:08:43.247 LINK app_repeat 00:08:43.247 LINK vhost_fuzz 00:08:43.247 CC test/rpc_client/rpc_client_test.o 00:08:43.505 CXX test/cpp_headers/bit_pool.o 00:08:43.505 LINK pci_ut 00:08:43.505 LINK aer 00:08:43.505 LINK reset 00:08:43.505 CC test/app/jsoncat/jsoncat.o 00:08:43.505 LINK hello_fsdev 00:08:43.505 LINK rpc_client_test 00:08:43.505 CC test/event/scheduler/scheduler.o 00:08:43.505 CXX test/cpp_headers/blob_bdev.o 00:08:43.763 LINK jsoncat 00:08:43.763 CC test/nvme/sgl/sgl.o 00:08:43.763 CXX test/cpp_headers/blobfs_bdev.o 00:08:43.763 CC examples/accel/perf/accel_perf.o 00:08:43.763 CXX test/cpp_headers/blobfs.o 00:08:43.763 LINK scheduler 00:08:43.763 CC test/app/stub/stub.o 00:08:44.021 LINK memory_ut 00:08:44.021 CC examples/blob/hello_world/hello_blob.o 00:08:44.021 CC test/accel/dif/dif.o 00:08:44.021 CXX test/cpp_headers/blob.o 00:08:44.021 LINK stub 00:08:44.021 LINK sgl 00:08:44.021 CC examples/blob/cli/blobcli.o 00:08:44.278 CC test/nvme/e2edp/nvme_dp.o 00:08:44.278 LINK hello_blob 00:08:44.278 CXX test/cpp_headers/conf.o 00:08:44.278 CXX test/cpp_headers/config.o 00:08:44.278 CC test/blobfs/mkfs/mkfs.o 00:08:44.278 CC test/nvme/err_injection/err_injection.o 00:08:44.278 CC test/nvme/overhead/overhead.o 00:08:44.540 CXX test/cpp_headers/cpuset.o 00:08:44.540 LINK accel_perf 00:08:44.540 LINK mkfs 00:08:44.540 LINK nvme_dp 00:08:44.540 CC test/nvme/startup/startup.o 00:08:44.540 LINK err_injection 00:08:44.540 CXX test/cpp_headers/crc16.o 00:08:44.801 LINK iscsi_fuzz 00:08:44.801 CC test/nvme/reserve/reserve.o 00:08:44.801 LINK overhead 00:08:44.801 LINK blobcli 00:08:44.801 CXX test/cpp_headers/crc32.o 00:08:44.801 LINK startup 00:08:44.801 CC test/nvme/simple_copy/simple_copy.o 00:08:44.801 LINK dif 00:08:44.801 CC test/nvme/connect_stress/connect_stress.o 00:08:45.059 CC test/nvme/boot_partition/boot_partition.o 00:08:45.059 CXX test/cpp_headers/crc64.o 00:08:45.059 LINK reserve 00:08:45.059 CC test/nvme/compliance/nvme_compliance.o 00:08:45.059 CC test/nvme/fused_ordering/fused_ordering.o 00:08:45.059 CC test/nvme/doorbell_aers/doorbell_aers.o 00:08:45.059 LINK simple_copy 00:08:45.059 LINK connect_stress 00:08:45.059 LINK boot_partition 00:08:45.059 CXX test/cpp_headers/dif.o 00:08:45.059 CC examples/nvme/hello_world/hello_world.o 00:08:45.317 CC test/nvme/fdp/fdp.o 00:08:45.317 LINK fused_ordering 00:08:45.317 LINK doorbell_aers 00:08:45.317 CC test/lvol/esnap/esnap.o 00:08:45.317 CXX test/cpp_headers/dma.o 00:08:45.317 CC test/nvme/cuse/cuse.o 00:08:45.575 LINK hello_world 00:08:45.575 LINK nvme_compliance 00:08:45.575 CC examples/bdev/hello_world/hello_bdev.o 00:08:45.575 CXX test/cpp_headers/endian.o 00:08:45.575 CC test/bdev/bdevio/bdevio.o 00:08:45.575 CC examples/nvme/reconnect/reconnect.o 00:08:45.575 CC examples/nvme/nvme_manage/nvme_manage.o 00:08:45.575 LINK fdp 00:08:45.833 CC examples/nvme/arbitration/arbitration.o 00:08:45.833 CC examples/nvme/hotplug/hotplug.o 00:08:45.833 CXX test/cpp_headers/env_dpdk.o 00:08:45.833 LINK hello_bdev 00:08:45.833 CC examples/nvme/cmb_copy/cmb_copy.o 00:08:46.092 CXX test/cpp_headers/env.o 00:08:46.092 LINK reconnect 00:08:46.092 LINK bdevio 00:08:46.092 LINK hotplug 00:08:46.092 CXX test/cpp_headers/event.o 00:08:46.092 CC examples/bdev/bdevperf/bdevperf.o 00:08:46.092 LINK arbitration 00:08:46.092 LINK cmb_copy 00:08:46.350 
LINK nvme_manage 00:08:46.350 CXX test/cpp_headers/fd_group.o 00:08:46.350 CC examples/nvme/abort/abort.o 00:08:46.350 CXX test/cpp_headers/fd.o 00:08:46.350 CXX test/cpp_headers/file.o 00:08:46.350 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:08:46.350 CXX test/cpp_headers/fsdev.o 00:08:46.350 CXX test/cpp_headers/fsdev_module.o 00:08:46.350 CXX test/cpp_headers/ftl.o 00:08:46.608 CXX test/cpp_headers/fuse_dispatcher.o 00:08:46.608 CXX test/cpp_headers/gpt_spec.o 00:08:46.608 LINK pmr_persistence 00:08:46.608 CXX test/cpp_headers/hexlify.o 00:08:46.608 CXX test/cpp_headers/histogram_data.o 00:08:46.608 CXX test/cpp_headers/idxd.o 00:08:46.608 CXX test/cpp_headers/idxd_spec.o 00:08:46.608 CXX test/cpp_headers/init.o 00:08:46.865 CXX test/cpp_headers/ioat.o 00:08:46.865 CXX test/cpp_headers/ioat_spec.o 00:08:46.865 LINK abort 00:08:46.865 CXX test/cpp_headers/iscsi_spec.o 00:08:46.865 CXX test/cpp_headers/json.o 00:08:46.865 CXX test/cpp_headers/jsonrpc.o 00:08:46.865 CXX test/cpp_headers/keyring.o 00:08:46.865 CXX test/cpp_headers/keyring_module.o 00:08:46.865 CXX test/cpp_headers/likely.o 00:08:46.865 CXX test/cpp_headers/log.o 00:08:46.865 CXX test/cpp_headers/lvol.o 00:08:47.123 CXX test/cpp_headers/md5.o 00:08:47.123 LINK cuse 00:08:47.123 CXX test/cpp_headers/memory.o 00:08:47.123 CXX test/cpp_headers/mmio.o 00:08:47.123 CXX test/cpp_headers/nbd.o 00:08:47.123 CXX test/cpp_headers/net.o 00:08:47.123 CXX test/cpp_headers/notify.o 00:08:47.123 CXX test/cpp_headers/nvme.o 00:08:47.123 CXX test/cpp_headers/nvme_intel.o 00:08:47.381 LINK bdevperf 00:08:47.381 CXX test/cpp_headers/nvme_ocssd.o 00:08:47.381 CXX test/cpp_headers/nvme_ocssd_spec.o 00:08:47.381 CXX test/cpp_headers/nvme_spec.o 00:08:47.381 CXX test/cpp_headers/nvme_zns.o 00:08:47.381 CXX test/cpp_headers/nvmf_cmd.o 00:08:47.381 CXX test/cpp_headers/nvmf_fc_spec.o 00:08:47.381 CXX test/cpp_headers/nvmf.o 00:08:47.381 CXX test/cpp_headers/nvmf_spec.o 00:08:47.640 CXX test/cpp_headers/nvmf_transport.o 00:08:47.640 CXX test/cpp_headers/opal.o 00:08:47.640 CXX test/cpp_headers/opal_spec.o 00:08:47.640 CXX test/cpp_headers/pci_ids.o 00:08:47.640 CXX test/cpp_headers/pipe.o 00:08:47.640 CXX test/cpp_headers/queue.o 00:08:47.640 CXX test/cpp_headers/reduce.o 00:08:47.640 CXX test/cpp_headers/rpc.o 00:08:47.640 CXX test/cpp_headers/scheduler.o 00:08:47.640 CC examples/nvmf/nvmf/nvmf.o 00:08:47.640 CXX test/cpp_headers/scsi.o 00:08:47.640 CXX test/cpp_headers/scsi_spec.o 00:08:47.640 CXX test/cpp_headers/sock.o 00:08:47.640 CXX test/cpp_headers/stdinc.o 00:08:47.898 CXX test/cpp_headers/string.o 00:08:47.898 CXX test/cpp_headers/thread.o 00:08:47.898 CXX test/cpp_headers/trace.o 00:08:47.898 CXX test/cpp_headers/trace_parser.o 00:08:47.898 CXX test/cpp_headers/tree.o 00:08:47.898 CXX test/cpp_headers/ublk.o 00:08:47.898 CXX test/cpp_headers/util.o 00:08:47.898 CXX test/cpp_headers/uuid.o 00:08:47.898 CXX test/cpp_headers/version.o 00:08:47.898 CXX test/cpp_headers/vfio_user_pci.o 00:08:47.898 CXX test/cpp_headers/vfio_user_spec.o 00:08:47.898 CXX test/cpp_headers/vhost.o 00:08:47.898 CXX test/cpp_headers/vmd.o 00:08:48.156 CXX test/cpp_headers/xor.o 00:08:48.156 LINK nvmf 00:08:48.156 CXX test/cpp_headers/zipf.o 00:08:53.427 LINK esnap 00:08:53.427 00:08:53.427 real 1m48.678s 00:08:53.427 user 9m58.196s 00:08:53.427 sys 1m58.384s 00:08:53.427 ************************************ 00:08:53.427 END TEST make 00:08:53.427 ************************************ 00:08:53.427 10:00:23 make -- common/autotest_common.sh@1130 -- $ 
xtrace_disable 00:08:53.427 10:00:23 make -- common/autotest_common.sh@10 -- $ set +x 00:08:53.427 10:00:23 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:08:53.427 10:00:23 -- pm/common@29 -- $ signal_monitor_resources TERM 00:08:53.427 10:00:23 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:08:53.427 10:00:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:53.427 10:00:23 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:08:53.427 10:00:23 -- pm/common@44 -- $ pid=5438 00:08:53.427 10:00:23 -- pm/common@50 -- $ kill -TERM 5438 00:08:53.427 10:00:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:53.427 10:00:23 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:08:53.427 10:00:23 -- pm/common@44 -- $ pid=5440 00:08:53.427 10:00:23 -- pm/common@50 -- $ kill -TERM 5440 00:08:53.427 10:00:23 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:08:53.427 10:00:23 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:08:53.427 10:00:23 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:53.427 10:00:23 -- common/autotest_common.sh@1711 -- # lcov --version 00:08:53.427 10:00:23 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:53.427 10:00:24 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:53.427 10:00:24 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:53.427 10:00:24 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:53.427 10:00:24 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:53.427 10:00:24 -- scripts/common.sh@336 -- # IFS=.-: 00:08:53.427 10:00:24 -- scripts/common.sh@336 -- # read -ra ver1 00:08:53.427 10:00:24 -- scripts/common.sh@337 -- # IFS=.-: 00:08:53.427 10:00:24 -- scripts/common.sh@337 -- # read -ra ver2 00:08:53.427 10:00:24 -- scripts/common.sh@338 -- # local 'op=<' 00:08:53.427 10:00:24 -- scripts/common.sh@340 -- # ver1_l=2 00:08:53.427 10:00:24 -- scripts/common.sh@341 -- # ver2_l=1 00:08:53.427 10:00:24 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:53.427 10:00:24 -- scripts/common.sh@344 -- # case "$op" in 00:08:53.427 10:00:24 -- scripts/common.sh@345 -- # : 1 00:08:53.427 10:00:24 -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:53.427 10:00:24 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:53.427 10:00:24 -- scripts/common.sh@365 -- # decimal 1 00:08:53.427 10:00:24 -- scripts/common.sh@353 -- # local d=1 00:08:53.427 10:00:24 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:53.427 10:00:24 -- scripts/common.sh@355 -- # echo 1 00:08:53.427 10:00:24 -- scripts/common.sh@365 -- # ver1[v]=1 00:08:53.427 10:00:24 -- scripts/common.sh@366 -- # decimal 2 00:08:53.427 10:00:24 -- scripts/common.sh@353 -- # local d=2 00:08:53.427 10:00:24 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:53.427 10:00:24 -- scripts/common.sh@355 -- # echo 2 00:08:53.427 10:00:24 -- scripts/common.sh@366 -- # ver2[v]=2 00:08:53.427 10:00:24 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:53.427 10:00:24 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:53.427 10:00:24 -- scripts/common.sh@368 -- # return 0 00:08:53.427 10:00:24 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:53.427 10:00:24 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:53.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.427 --rc genhtml_branch_coverage=1 00:08:53.427 --rc genhtml_function_coverage=1 00:08:53.427 --rc genhtml_legend=1 00:08:53.427 --rc geninfo_all_blocks=1 00:08:53.427 --rc geninfo_unexecuted_blocks=1 00:08:53.427 00:08:53.427 ' 00:08:53.427 10:00:24 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:53.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.427 --rc genhtml_branch_coverage=1 00:08:53.427 --rc genhtml_function_coverage=1 00:08:53.427 --rc genhtml_legend=1 00:08:53.427 --rc geninfo_all_blocks=1 00:08:53.427 --rc geninfo_unexecuted_blocks=1 00:08:53.427 00:08:53.427 ' 00:08:53.427 10:00:24 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:53.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.427 --rc genhtml_branch_coverage=1 00:08:53.427 --rc genhtml_function_coverage=1 00:08:53.427 --rc genhtml_legend=1 00:08:53.427 --rc geninfo_all_blocks=1 00:08:53.427 --rc geninfo_unexecuted_blocks=1 00:08:53.427 00:08:53.427 ' 00:08:53.427 10:00:24 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:53.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.427 --rc genhtml_branch_coverage=1 00:08:53.427 --rc genhtml_function_coverage=1 00:08:53.427 --rc genhtml_legend=1 00:08:53.427 --rc geninfo_all_blocks=1 00:08:53.427 --rc geninfo_unexecuted_blocks=1 00:08:53.427 00:08:53.427 ' 00:08:53.427 10:00:24 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:53.427 10:00:24 -- nvmf/common.sh@7 -- # uname -s 00:08:53.427 10:00:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:53.427 10:00:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:53.427 10:00:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:53.427 10:00:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:53.427 10:00:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:53.427 10:00:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:53.427 10:00:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:53.427 10:00:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:53.427 10:00:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:53.427 10:00:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:53.427 10:00:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9480b59e-3d5c-4268-b741-40b3738e039b 00:08:53.427 
10:00:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=9480b59e-3d5c-4268-b741-40b3738e039b 00:08:53.427 10:00:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:53.427 10:00:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:53.427 10:00:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:53.427 10:00:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:53.427 10:00:24 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:53.427 10:00:24 -- scripts/common.sh@15 -- # shopt -s extglob 00:08:53.427 10:00:24 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:53.427 10:00:24 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:53.427 10:00:24 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:53.427 10:00:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.427 10:00:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.427 10:00:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.427 10:00:24 -- paths/export.sh@5 -- # export PATH 00:08:53.427 10:00:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.428 10:00:24 -- nvmf/common.sh@51 -- # : 0 00:08:53.428 10:00:24 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:53.428 10:00:24 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:53.428 10:00:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:53.428 10:00:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:53.428 10:00:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:53.428 10:00:24 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:53.428 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:53.428 10:00:24 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:53.428 10:00:24 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:53.428 10:00:24 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:53.428 10:00:24 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:08:53.428 10:00:24 -- spdk/autotest.sh@32 -- # uname -s 00:08:53.428 10:00:24 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:08:53.428 10:00:24 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:08:53.428 10:00:24 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:08:53.428 10:00:24 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:08:53.428 10:00:24 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:08:53.428 10:00:24 -- spdk/autotest.sh@44 -- # modprobe nbd 00:08:53.428 10:00:24 -- spdk/autotest.sh@46 -- # type -P udevadm 00:08:53.428 10:00:24 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:08:53.428 10:00:24 -- spdk/autotest.sh@48 -- # udevadm_pid=55119 00:08:53.428 10:00:24 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:08:53.428 10:00:24 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:08:53.428 10:00:24 -- pm/common@17 -- # local monitor 00:08:53.428 10:00:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:53.428 10:00:24 -- pm/common@21 -- # date +%s 00:08:53.428 10:00:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:53.428 10:00:24 -- pm/common@25 -- # sleep 1 00:08:53.428 10:00:24 -- pm/common@21 -- # date +%s 00:08:53.428 10:00:24 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733738424 00:08:53.428 10:00:24 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733738424 00:08:53.686 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733738424_collect-cpu-load.pm.log 00:08:53.686 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733738424_collect-vmstat.pm.log 00:08:54.621 10:00:25 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:08:54.621 10:00:25 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:08:54.621 10:00:25 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:54.621 10:00:25 -- common/autotest_common.sh@10 -- # set +x 00:08:54.621 10:00:25 -- spdk/autotest.sh@59 -- # create_test_list 00:08:54.621 10:00:25 -- common/autotest_common.sh@752 -- # xtrace_disable 00:08:54.621 10:00:25 -- common/autotest_common.sh@10 -- # set +x 00:08:54.621 10:00:25 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:08:54.621 10:00:25 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:08:54.621 10:00:25 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:08:54.621 10:00:25 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:08:54.621 10:00:25 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:08:54.621 10:00:25 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:08:54.621 10:00:25 -- common/autotest_common.sh@1457 -- # uname 00:08:54.621 10:00:25 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:08:54.621 10:00:25 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:08:54.621 10:00:25 -- common/autotest_common.sh@1477 -- # uname 00:08:54.621 10:00:25 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:08:54.621 10:00:25 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:08:54.621 10:00:25 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:08:54.621 lcov: LCOV version 1.15 00:08:54.621 10:00:25 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:09:12.702 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:09:12.702 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:09:30.878 10:00:59 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:09:30.878 10:00:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:30.878 10:00:59 -- common/autotest_common.sh@10 -- # set +x 00:09:30.878 10:00:59 -- spdk/autotest.sh@78 -- # rm -f 00:09:30.878 10:00:59 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:30.878 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:30.878 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:09:30.878 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:09:30.878 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:09:30.878 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:09:30.878 10:01:00 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:09:30.878 10:01:00 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:09:30.878 10:01:00 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:09:30.878 10:01:00 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:09:30.878 10:01:00 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:09:30.878 10:01:00 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:09:30.878 10:01:00 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:09:30.878 10:01:00 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:09:30.878 10:01:00 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:09:30.878 10:01:00 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:09:30.878 10:01:00 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:09:30.878 10:01:00 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:09:30.878 10:01:00 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:30.878 10:01:00 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:09:30.878 10:01:00 -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:09:30.878 10:01:00 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:09:30.878 10:01:00 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:09:30.878 10:01:00 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:09:30.878 10:01:00 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:09:30.878 10:01:00 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:30.878 10:01:00 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:09:30.878 10:01:00 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:09:30.878 10:01:00 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:09:30.878 10:01:00 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:09:30.878 10:01:00 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:30.878 10:01:00 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:09:30.878 10:01:00 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:09:30.878 10:01:00 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:09:30.878 10:01:00 -- common/autotest_common.sh@1652 -- # [[ 
-e /sys/block/nvme1n3/queue/zoned ]] 00:09:30.878 10:01:00 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:30.878 10:01:00 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:09:30.878 10:01:00 -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:09:30.878 10:01:00 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:09:30.878 10:01:00 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2c2n1 00:09:30.878 10:01:00 -- common/autotest_common.sh@1650 -- # local device=nvme2c2n1 00:09:30.878 10:01:00 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:09:30.878 10:01:00 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:30.878 10:01:00 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:09:30.878 10:01:00 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:09:30.878 10:01:00 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:09:30.878 10:01:00 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:09:30.878 10:01:00 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:09:30.878 10:01:00 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:09:30.878 10:01:00 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:30.878 10:01:00 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:09:30.878 10:01:00 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:30.878 10:01:00 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:30.878 10:01:00 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:09:30.878 10:01:00 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:09:30.878 10:01:00 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:09:30.878 No valid GPT data, bailing 00:09:30.878 10:01:00 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:09:30.878 10:01:00 -- scripts/common.sh@394 -- # pt= 00:09:30.878 10:01:00 -- scripts/common.sh@395 -- # return 1 00:09:30.878 10:01:00 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:09:30.878 1+0 records in 00:09:30.878 1+0 records out 00:09:30.878 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00521783 s, 201 MB/s 00:09:30.878 10:01:00 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:30.878 10:01:00 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:30.878 10:01:00 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:09:30.878 10:01:00 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:09:30.878 10:01:00 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:09:30.878 No valid GPT data, bailing 00:09:30.878 10:01:00 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:09:30.878 10:01:00 -- scripts/common.sh@394 -- # pt= 00:09:30.878 10:01:00 -- scripts/common.sh@395 -- # return 1 00:09:30.878 10:01:00 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:09:30.878 1+0 records in 00:09:30.878 1+0 records out 00:09:30.878 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00509212 s, 206 MB/s 00:09:30.878 10:01:00 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:30.878 10:01:00 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:30.878 10:01:00 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:09:30.878 10:01:00 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:09:30.878 10:01:00 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:09:30.878 No valid GPT data, bailing 00:09:30.878 10:01:00 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:09:30.878 10:01:00 -- scripts/common.sh@394 -- # pt= 00:09:30.878 10:01:00 -- scripts/common.sh@395 -- # return 1 00:09:30.878 10:01:00 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:09:30.878 1+0 records in 00:09:30.878 1+0 records out 00:09:30.878 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00522735 s, 201 MB/s 00:09:30.878 10:01:01 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:30.878 10:01:01 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:30.878 10:01:01 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:09:30.878 10:01:01 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:09:30.878 10:01:01 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:09:30.878 No valid GPT data, bailing 00:09:30.878 10:01:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:09:30.878 10:01:01 -- scripts/common.sh@394 -- # pt= 00:09:30.878 10:01:01 -- scripts/common.sh@395 -- # return 1 00:09:30.878 10:01:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:09:30.878 1+0 records in 00:09:30.878 1+0 records out 00:09:30.878 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00466405 s, 225 MB/s 00:09:30.878 10:01:01 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:30.878 10:01:01 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:30.878 10:01:01 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:09:30.878 10:01:01 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:09:30.878 10:01:01 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:09:30.878 No valid GPT data, bailing 00:09:30.878 10:01:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:09:30.878 10:01:01 -- scripts/common.sh@394 -- # pt= 00:09:30.878 10:01:01 -- scripts/common.sh@395 -- # return 1 00:09:30.878 10:01:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:09:30.878 1+0 records in 00:09:30.878 1+0 records out 00:09:30.878 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00481004 s, 218 MB/s 00:09:30.878 10:01:01 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:30.878 10:01:01 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:30.878 10:01:01 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:09:30.878 10:01:01 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:09:30.878 10:01:01 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:09:30.878 No valid GPT data, bailing 00:09:30.878 10:01:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:09:30.878 10:01:01 -- scripts/common.sh@394 -- # pt= 00:09:30.878 10:01:01 -- scripts/common.sh@395 -- # return 1 00:09:30.878 10:01:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:09:30.878 1+0 records in 00:09:30.878 1+0 records out 00:09:30.878 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141396 s, 74.2 MB/s 00:09:30.879 10:01:01 -- spdk/autotest.sh@105 -- # sync 00:09:30.879 10:01:01 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:09:30.879 10:01:01 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:09:30.879 10:01:01 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:09:32.781 
10:01:03 -- spdk/autotest.sh@111 -- # uname -s 00:09:32.781 10:01:03 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:09:32.781 10:01:03 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:09:32.781 10:01:03 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:09:33.348 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:33.606 Hugepages 00:09:33.606 node hugesize free / total 00:09:33.606 node0 1048576kB 0 / 0 00:09:33.606 node0 2048kB 0 / 0 00:09:33.606 00:09:33.606 Type BDF Vendor Device NUMA Driver Device Block devices 00:09:33.865 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:09:33.865 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:09:33.865 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:09:33.865 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:09:34.123 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:09:34.123 10:01:04 -- spdk/autotest.sh@117 -- # uname -s 00:09:34.123 10:01:04 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:09:34.123 10:01:04 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:09:34.123 10:01:04 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:34.699 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:35.265 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:35.265 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:35.265 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:35.265 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:35.265 10:01:06 -- common/autotest_common.sh@1517 -- # sleep 1 00:09:36.640 10:01:07 -- common/autotest_common.sh@1518 -- # bdfs=() 00:09:36.640 10:01:07 -- common/autotest_common.sh@1518 -- # local bdfs 00:09:36.640 10:01:07 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:09:36.640 10:01:07 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:09:36.640 10:01:07 -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:36.640 10:01:07 -- common/autotest_common.sh@1498 -- # local bdfs 00:09:36.640 10:01:07 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:36.640 10:01:07 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:36.640 10:01:07 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:36.640 10:01:07 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:36.640 10:01:07 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:36.640 10:01:07 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:36.898 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:36.898 Waiting for block devices as requested 00:09:36.898 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:37.156 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:37.156 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:37.156 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:42.478 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:42.478 10:01:13 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:09:42.478 10:01:13 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 
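Aside: the trace below resolves a PCI BDF to its NVMe controller node. A minimal bash sketch of what get_nvme_ctrlr_from_bdf does here, assuming the sysfs layout printed in this run (the BDF 0000:00:10.0 and the resulting /dev/nvme1 are specific to this host):

#!/usr/bin/env bash
# Resolve a PCI address to its NVMe character device: readlink every
# /sys/class/nvme/nvme* symlink and keep the one whose real path contains the BDF.
bdf=0000:00:10.0
path=$(readlink -f /sys/class/nvme/nvme* 2>/dev/null | grep "$bdf/nvme/nvme")
[[ -n $path ]] || { echo "no NVMe controller behind $bdf" >&2; exit 1; }
nvme_ctrlr=/dev/$(basename "$path")   # -> /dev/nvme1 in the trace below
echo "$nvme_ctrlr"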
00:09:42.478 10:01:13 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:09:42.478 10:01:13 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:09:42.478 10:01:13 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:09:42.478 10:01:13 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:09:42.478 10:01:13 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:09:42.478 10:01:13 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:09:42.478 10:01:13 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:09:42.478 10:01:13 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:09:42.478 10:01:13 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:09:42.478 10:01:13 -- common/autotest_common.sh@1531 -- # grep oacs 00:09:42.478 10:01:13 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:09:42.478 10:01:13 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:09:42.478 10:01:13 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:09:42.478 10:01:13 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:09:42.478 10:01:13 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:09:42.478 10:01:13 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:09:42.478 10:01:13 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:09:42.478 10:01:13 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:09:42.478 10:01:13 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:09:42.478 10:01:13 -- common/autotest_common.sh@1543 -- # continue 00:09:42.478 10:01:13 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:09:42.478 10:01:13 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:09:42.478 10:01:13 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:09:42.478 10:01:13 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:09:42.478 10:01:13 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:09:42.478 10:01:13 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:09:42.478 10:01:13 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:09:42.478 10:01:13 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:09:42.478 10:01:13 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:09:42.478 10:01:13 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:09:42.478 10:01:13 -- common/autotest_common.sh@1531 -- # grep oacs 00:09:42.478 10:01:13 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:09:42.478 10:01:13 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:09:42.478 10:01:13 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:09:42.478 10:01:13 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:09:42.478 10:01:13 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:09:42.478 10:01:13 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:09:42.478 10:01:13 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:09:42.478 10:01:13 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:09:42.478 10:01:13 -- common/autotest_common.sh@1540 -- # 
unvmcap=' 0' 00:09:42.478 10:01:13 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:09:42.478 10:01:13 -- common/autotest_common.sh@1543 -- # continue 00:09:42.478 10:01:13 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:09:42.478 10:01:13 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:09:42.478 10:01:13 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:09:42.478 10:01:13 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:09:42.478 10:01:13 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:09:42.478 10:01:13 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:09:42.478 10:01:13 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:09:42.478 10:01:13 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:09:42.478 10:01:13 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:09:42.478 10:01:13 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:09:42.478 10:01:13 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:09:42.478 10:01:13 -- common/autotest_common.sh@1531 -- # grep oacs 00:09:42.478 10:01:13 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:09:42.478 10:01:13 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:09:42.478 10:01:13 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:09:42.478 10:01:13 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:09:42.478 10:01:13 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:09:42.478 10:01:13 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:09:42.478 10:01:13 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:09:42.478 10:01:13 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:09:42.478 10:01:13 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:09:42.478 10:01:13 -- common/autotest_common.sh@1543 -- # continue 00:09:42.478 10:01:13 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:09:42.479 10:01:13 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:09:42.479 10:01:13 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:09:42.479 10:01:13 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:09:42.479 10:01:13 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:09:42.479 10:01:13 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:09:42.479 10:01:13 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:09:42.479 10:01:13 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:09:42.479 10:01:13 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:09:42.479 10:01:13 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:09:42.479 10:01:13 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:09:42.479 10:01:13 -- common/autotest_common.sh@1531 -- # grep oacs 00:09:42.479 10:01:13 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:09:42.479 10:01:13 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:09:42.479 10:01:13 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:09:42.479 10:01:13 -- 
common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:09:42.479 10:01:13 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:09:42.479 10:01:13 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:09:42.479 10:01:13 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:09:42.479 10:01:13 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:09:42.479 10:01:13 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:09:42.479 10:01:13 -- common/autotest_common.sh@1543 -- # continue 00:09:42.479 10:01:13 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:09:42.479 10:01:13 -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:42.479 10:01:13 -- common/autotest_common.sh@10 -- # set +x 00:09:42.479 10:01:13 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:09:42.479 10:01:13 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:42.479 10:01:13 -- common/autotest_common.sh@10 -- # set +x 00:09:42.479 10:01:13 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:43.056 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:43.632 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:43.632 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:43.632 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:43.632 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:43.632 10:01:14 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:09:43.632 10:01:14 -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:43.632 10:01:14 -- common/autotest_common.sh@10 -- # set +x 00:09:43.890 10:01:14 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:09:43.891 10:01:14 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:09:43.891 10:01:14 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:09:43.891 10:01:14 -- common/autotest_common.sh@1563 -- # bdfs=() 00:09:43.891 10:01:14 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:09:43.891 10:01:14 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:09:43.891 10:01:14 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:09:43.891 10:01:14 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:09:43.891 10:01:14 -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:43.891 10:01:14 -- common/autotest_common.sh@1498 -- # local bdfs 00:09:43.891 10:01:14 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:43.891 10:01:14 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:43.891 10:01:14 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:43.891 10:01:14 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:43.891 10:01:14 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:43.891 10:01:14 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:09:43.891 10:01:14 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:09:43.891 10:01:14 -- common/autotest_common.sh@1566 -- # device=0x0010 00:09:43.891 10:01:14 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:09:43.891 10:01:14 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:09:43.891 10:01:14 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:09:43.891 10:01:14 -- common/autotest_common.sh@1566 -- # device=0x0010 00:09:43.891 
10:01:14 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:09:43.891 10:01:14 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:09:43.891 10:01:14 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:09:43.891 10:01:14 -- common/autotest_common.sh@1566 -- # device=0x0010 00:09:43.891 10:01:14 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:09:43.891 10:01:14 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:09:43.891 10:01:14 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:09:43.891 10:01:14 -- common/autotest_common.sh@1566 -- # device=0x0010 00:09:43.891 10:01:14 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:09:43.891 10:01:14 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:09:43.891 10:01:14 -- common/autotest_common.sh@1572 -- # return 0 00:09:43.891 10:01:14 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:09:43.891 10:01:14 -- common/autotest_common.sh@1580 -- # return 0 00:09:43.891 10:01:14 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:09:43.891 10:01:14 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:09:43.891 10:01:14 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:09:43.891 10:01:14 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:09:43.891 10:01:14 -- spdk/autotest.sh@149 -- # timing_enter lib 00:09:43.891 10:01:14 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:43.891 10:01:14 -- common/autotest_common.sh@10 -- # set +x 00:09:43.891 10:01:14 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:09:43.891 10:01:14 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:43.891 10:01:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:43.891 10:01:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.891 10:01:14 -- common/autotest_common.sh@10 -- # set +x 00:09:43.891 ************************************ 00:09:43.891 START TEST env 00:09:43.891 ************************************ 00:09:43.891 10:01:14 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:43.891 * Looking for test storage... 
00:09:43.891 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:09:43.891 10:01:14 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:43.891 10:01:14 env -- common/autotest_common.sh@1711 -- # lcov --version 00:09:43.891 10:01:14 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:44.150 10:01:14 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:44.150 10:01:14 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:44.150 10:01:14 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:44.150 10:01:14 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:44.150 10:01:14 env -- scripts/common.sh@336 -- # IFS=.-: 00:09:44.150 10:01:14 env -- scripts/common.sh@336 -- # read -ra ver1 00:09:44.150 10:01:14 env -- scripts/common.sh@337 -- # IFS=.-: 00:09:44.150 10:01:14 env -- scripts/common.sh@337 -- # read -ra ver2 00:09:44.150 10:01:14 env -- scripts/common.sh@338 -- # local 'op=<' 00:09:44.150 10:01:14 env -- scripts/common.sh@340 -- # ver1_l=2 00:09:44.150 10:01:14 env -- scripts/common.sh@341 -- # ver2_l=1 00:09:44.150 10:01:14 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:44.150 10:01:14 env -- scripts/common.sh@344 -- # case "$op" in 00:09:44.150 10:01:14 env -- scripts/common.sh@345 -- # : 1 00:09:44.150 10:01:14 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:44.150 10:01:14 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:44.150 10:01:14 env -- scripts/common.sh@365 -- # decimal 1 00:09:44.150 10:01:14 env -- scripts/common.sh@353 -- # local d=1 00:09:44.150 10:01:14 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:44.150 10:01:14 env -- scripts/common.sh@355 -- # echo 1 00:09:44.150 10:01:14 env -- scripts/common.sh@365 -- # ver1[v]=1 00:09:44.150 10:01:14 env -- scripts/common.sh@366 -- # decimal 2 00:09:44.150 10:01:14 env -- scripts/common.sh@353 -- # local d=2 00:09:44.150 10:01:14 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:44.150 10:01:14 env -- scripts/common.sh@355 -- # echo 2 00:09:44.150 10:01:14 env -- scripts/common.sh@366 -- # ver2[v]=2 00:09:44.150 10:01:14 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:44.150 10:01:14 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:44.150 10:01:14 env -- scripts/common.sh@368 -- # return 0 00:09:44.150 10:01:14 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:44.150 10:01:14 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:44.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.150 --rc genhtml_branch_coverage=1 00:09:44.150 --rc genhtml_function_coverage=1 00:09:44.150 --rc genhtml_legend=1 00:09:44.150 --rc geninfo_all_blocks=1 00:09:44.150 --rc geninfo_unexecuted_blocks=1 00:09:44.150 00:09:44.150 ' 00:09:44.150 10:01:14 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:44.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.150 --rc genhtml_branch_coverage=1 00:09:44.150 --rc genhtml_function_coverage=1 00:09:44.150 --rc genhtml_legend=1 00:09:44.150 --rc geninfo_all_blocks=1 00:09:44.150 --rc geninfo_unexecuted_blocks=1 00:09:44.150 00:09:44.150 ' 00:09:44.150 10:01:14 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:44.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.150 --rc genhtml_branch_coverage=1 00:09:44.150 --rc genhtml_function_coverage=1 00:09:44.150 --rc 
genhtml_legend=1 00:09:44.150 --rc geninfo_all_blocks=1 00:09:44.150 --rc geninfo_unexecuted_blocks=1 00:09:44.150 00:09:44.150 ' 00:09:44.150 10:01:14 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:44.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.150 --rc genhtml_branch_coverage=1 00:09:44.150 --rc genhtml_function_coverage=1 00:09:44.150 --rc genhtml_legend=1 00:09:44.150 --rc geninfo_all_blocks=1 00:09:44.150 --rc geninfo_unexecuted_blocks=1 00:09:44.150 00:09:44.150 ' 00:09:44.150 10:01:14 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:44.150 10:01:14 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:44.150 10:01:14 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.150 10:01:14 env -- common/autotest_common.sh@10 -- # set +x 00:09:44.150 ************************************ 00:09:44.150 START TEST env_memory 00:09:44.150 ************************************ 00:09:44.150 10:01:14 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:44.150 00:09:44.150 00:09:44.150 CUnit - A unit testing framework for C - Version 2.1-3 00:09:44.150 http://cunit.sourceforge.net/ 00:09:44.150 00:09:44.150 00:09:44.150 Suite: memory 00:09:44.150 Test: alloc and free memory map ...[2024-12-09 10:01:14.831978] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:09:44.150 passed 00:09:44.150 Test: mem map translation ...[2024-12-09 10:01:14.892520] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:09:44.150 [2024-12-09 10:01:14.892643] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:09:44.150 [2024-12-09 10:01:14.892748] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:09:44.150 [2024-12-09 10:01:14.892785] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:09:44.410 passed 00:09:44.410 Test: mem map registration ...[2024-12-09 10:01:15.000448] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:09:44.410 [2024-12-09 10:01:15.000579] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:09:44.410 passed 00:09:44.410 Test: mem map adjacent registrations ...passed 00:09:44.410 00:09:44.410 Run Summary: Type Total Ran Passed Failed Inactive 00:09:44.410 suites 1 1 n/a 0 0 00:09:44.410 tests 4 4 4 0 0 00:09:44.410 asserts 152 152 152 0 n/a 00:09:44.410 00:09:44.410 Elapsed time = 0.353 seconds 00:09:44.410 00:09:44.410 real 0m0.403s 00:09:44.410 user 0m0.359s 00:09:44.410 sys 0m0.034s 00:09:44.410 10:01:15 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.410 10:01:15 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:09:44.410 ************************************ 00:09:44.410 END TEST env_memory 00:09:44.410 ************************************ 00:09:44.410 10:01:15 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:44.410 10:01:15 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:44.410 10:01:15 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.410 10:01:15 env -- common/autotest_common.sh@10 -- # set +x 00:09:44.669 ************************************ 00:09:44.669 START TEST env_vtophys 00:09:44.669 ************************************ 00:09:44.669 10:01:15 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:44.669 EAL: lib.eal log level changed from notice to debug 00:09:44.669 EAL: Detected lcore 0 as core 0 on socket 0 00:09:44.669 EAL: Detected lcore 1 as core 0 on socket 0 00:09:44.669 EAL: Detected lcore 2 as core 0 on socket 0 00:09:44.669 EAL: Detected lcore 3 as core 0 on socket 0 00:09:44.669 EAL: Detected lcore 4 as core 0 on socket 0 00:09:44.669 EAL: Detected lcore 5 as core 0 on socket 0 00:09:44.669 EAL: Detected lcore 6 as core 0 on socket 0 00:09:44.669 EAL: Detected lcore 7 as core 0 on socket 0 00:09:44.669 EAL: Detected lcore 8 as core 0 on socket 0 00:09:44.669 EAL: Detected lcore 9 as core 0 on socket 0 00:09:44.669 EAL: Maximum logical cores by configuration: 128 00:09:44.669 EAL: Detected CPU lcores: 10 00:09:44.669 EAL: Detected NUMA nodes: 1 00:09:44.669 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:09:44.669 EAL: Detected shared linkage of DPDK 00:09:44.669 EAL: No shared files mode enabled, IPC will be disabled 00:09:44.669 EAL: Selected IOVA mode 'PA' 00:09:44.669 EAL: Probing VFIO support... 00:09:44.669 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:09:44.669 EAL: VFIO modules not loaded, skipping VFIO support... 00:09:44.669 EAL: Ask a virtual area of 0x2e000 bytes 00:09:44.669 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:09:44.669 EAL: Setting up physically contiguous memory... 
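A note on the VFIO probe above: EAL finds neither /sys/module/vfio nor /sys/module/vfio_pci in this VM, so it skips VFIO and selects IOVA mode 'PA'. On a host where VFIO is wanted, the presence check and module load would look roughly like this (illustrative commands, not part of this job's scripts):

    # Illustrative only: what EAL probed for before falling back to PA mode.
    ls /sys/module/vfio /sys/module/vfio_pci 2>/dev/null || sudo modprobe vfio-pci
    # With vfio-pci loaded and an IOMMU active, EAL would pick IOVA mode 'VA' instead.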
00:09:44.669 EAL: Setting maximum number of open files to 524288 00:09:44.669 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:09:44.669 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:09:44.669 EAL: Ask a virtual area of 0x61000 bytes 00:09:44.669 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:09:44.669 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:44.669 EAL: Ask a virtual area of 0x400000000 bytes 00:09:44.669 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:09:44.669 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:09:44.669 EAL: Ask a virtual area of 0x61000 bytes 00:09:44.669 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:09:44.669 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:44.669 EAL: Ask a virtual area of 0x400000000 bytes 00:09:44.669 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:09:44.669 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:09:44.669 EAL: Ask a virtual area of 0x61000 bytes 00:09:44.669 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:09:44.669 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:44.669 EAL: Ask a virtual area of 0x400000000 bytes 00:09:44.669 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:09:44.669 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:09:44.670 EAL: Ask a virtual area of 0x61000 bytes 00:09:44.670 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:09:44.670 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:44.670 EAL: Ask a virtual area of 0x400000000 bytes 00:09:44.670 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:09:44.670 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:09:44.670 EAL: Hugepages will be freed exactly as allocated. 00:09:44.670 EAL: No shared files mode enabled, IPC is disabled 00:09:44.670 EAL: No shared files mode enabled, IPC is disabled 00:09:44.670 EAL: TSC frequency is ~2200000 KHz 00:09:44.670 EAL: Main lcore 0 is ready (tid=7fed5c6d3a40;cpuset=[0]) 00:09:44.670 EAL: Trying to obtain current memory policy. 00:09:44.670 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:44.670 EAL: Restoring previous memory policy: 0 00:09:44.670 EAL: request: mp_malloc_sync 00:09:44.670 EAL: No shared files mode enabled, IPC is disabled 00:09:44.670 EAL: Heap on socket 0 was expanded by 2MB 00:09:44.670 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:09:44.670 EAL: No PCI address specified using 'addr=' in: bus=pci 00:09:44.670 EAL: Mem event callback 'spdk:(nil)' registered 00:09:44.670 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:09:44.670 00:09:44.670 00:09:44.670 CUnit - A unit testing framework for C - Version 2.1-3 00:09:44.670 http://cunit.sourceforge.net/ 00:09:44.670 00:09:44.670 00:09:44.670 Suite: components_suite 00:09:45.239 Test: vtophys_malloc_test ...passed 00:09:45.239 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
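The memseg reservations above are self-consistent: EAL creates 4 segment lists of n_segs:8192 with hugepage_sz:2097152, and 8192 segments of 2 MiB is exactly the 0x400000000 bytes (16 GiB) of virtual address space reserved per list; the small 0x61000-byte areas hold each list's bookkeeping. A quick arithmetic check:

    # 8192 segments * 2 MiB hugepages = 16 GiB of VA per memseg list
    printf '0x%x bytes (%d GiB)\n' $((8192 * 2097152)) $((8192 * 2097152 / 1024**3))
    # -> 0x400000000 bytes (16 GiB), matching each "size = 0x400000000" line above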
00:09:45.239 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:45.239 EAL: Restoring previous memory policy: 4 00:09:45.239 EAL: Calling mem event callback 'spdk:(nil)' 00:09:45.239 EAL: request: mp_malloc_sync 00:09:45.239 EAL: No shared files mode enabled, IPC is disabled 00:09:45.239 EAL: Heap on socket 0 was expanded by 4MB 00:09:45.239 EAL: Calling mem event callback 'spdk:(nil)' 00:09:45.239 EAL: request: mp_malloc_sync 00:09:45.239 EAL: No shared files mode enabled, IPC is disabled 00:09:45.239 EAL: Heap on socket 0 was shrunk by 4MB 00:09:45.239 EAL: Trying to obtain current memory policy. 00:09:45.239 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:45.239 EAL: Restoring previous memory policy: 4 00:09:45.239 EAL: Calling mem event callback 'spdk:(nil)' 00:09:45.239 EAL: request: mp_malloc_sync 00:09:45.239 EAL: No shared files mode enabled, IPC is disabled 00:09:45.239 EAL: Heap on socket 0 was expanded by 6MB 00:09:45.239 EAL: Calling mem event callback 'spdk:(nil)' 00:09:45.239 EAL: request: mp_malloc_sync 00:09:45.239 EAL: No shared files mode enabled, IPC is disabled 00:09:45.239 EAL: Heap on socket 0 was shrunk by 6MB 00:09:45.239 EAL: Trying to obtain current memory policy. 00:09:45.239 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:45.239 EAL: Restoring previous memory policy: 4 00:09:45.239 EAL: Calling mem event callback 'spdk:(nil)' 00:09:45.239 EAL: request: mp_malloc_sync 00:09:45.239 EAL: No shared files mode enabled, IPC is disabled 00:09:45.239 EAL: Heap on socket 0 was expanded by 10MB 00:09:45.239 EAL: Calling mem event callback 'spdk:(nil)' 00:09:45.239 EAL: request: mp_malloc_sync 00:09:45.239 EAL: No shared files mode enabled, IPC is disabled 00:09:45.239 EAL: Heap on socket 0 was shrunk by 10MB 00:09:45.239 EAL: Trying to obtain current memory policy. 00:09:45.239 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:45.239 EAL: Restoring previous memory policy: 4 00:09:45.239 EAL: Calling mem event callback 'spdk:(nil)' 00:09:45.239 EAL: request: mp_malloc_sync 00:09:45.239 EAL: No shared files mode enabled, IPC is disabled 00:09:45.239 EAL: Heap on socket 0 was expanded by 18MB 00:09:45.498 EAL: Calling mem event callback 'spdk:(nil)' 00:09:45.498 EAL: request: mp_malloc_sync 00:09:45.498 EAL: No shared files mode enabled, IPC is disabled 00:09:45.498 EAL: Heap on socket 0 was shrunk by 18MB 00:09:45.498 EAL: Trying to obtain current memory policy. 00:09:45.498 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:45.498 EAL: Restoring previous memory policy: 4 00:09:45.498 EAL: Calling mem event callback 'spdk:(nil)' 00:09:45.498 EAL: request: mp_malloc_sync 00:09:45.498 EAL: No shared files mode enabled, IPC is disabled 00:09:45.498 EAL: Heap on socket 0 was expanded by 34MB 00:09:45.498 EAL: Calling mem event callback 'spdk:(nil)' 00:09:45.498 EAL: request: mp_malloc_sync 00:09:45.498 EAL: No shared files mode enabled, IPC is disabled 00:09:45.498 EAL: Heap on socket 0 was shrunk by 34MB 00:09:45.498 EAL: Trying to obtain current memory policy. 
00:09:45.498 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:45.498 EAL: Restoring previous memory policy: 4 00:09:45.498 EAL: Calling mem event callback 'spdk:(nil)' 00:09:45.498 EAL: request: mp_malloc_sync 00:09:45.498 EAL: No shared files mode enabled, IPC is disabled 00:09:45.498 EAL: Heap on socket 0 was expanded by 66MB 00:09:45.757 EAL: Calling mem event callback 'spdk:(nil)' 00:09:45.757 EAL: request: mp_malloc_sync 00:09:45.757 EAL: No shared files mode enabled, IPC is disabled 00:09:45.757 EAL: Heap on socket 0 was shrunk by 66MB 00:09:45.757 EAL: Trying to obtain current memory policy. 00:09:45.757 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:45.757 EAL: Restoring previous memory policy: 4 00:09:45.757 EAL: Calling mem event callback 'spdk:(nil)' 00:09:45.757 EAL: request: mp_malloc_sync 00:09:45.757 EAL: No shared files mode enabled, IPC is disabled 00:09:45.757 EAL: Heap on socket 0 was expanded by 130MB 00:09:46.016 EAL: Calling mem event callback 'spdk:(nil)' 00:09:46.016 EAL: request: mp_malloc_sync 00:09:46.016 EAL: No shared files mode enabled, IPC is disabled 00:09:46.016 EAL: Heap on socket 0 was shrunk by 130MB 00:09:46.275 EAL: Trying to obtain current memory policy. 00:09:46.275 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:46.275 EAL: Restoring previous memory policy: 4 00:09:46.275 EAL: Calling mem event callback 'spdk:(nil)' 00:09:46.275 EAL: request: mp_malloc_sync 00:09:46.275 EAL: No shared files mode enabled, IPC is disabled 00:09:46.275 EAL: Heap on socket 0 was expanded by 258MB 00:09:46.843 EAL: Calling mem event callback 'spdk:(nil)' 00:09:46.843 EAL: request: mp_malloc_sync 00:09:46.843 EAL: No shared files mode enabled, IPC is disabled 00:09:46.843 EAL: Heap on socket 0 was shrunk by 258MB 00:09:47.411 EAL: Trying to obtain current memory policy. 00:09:47.411 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:47.411 EAL: Restoring previous memory policy: 4 00:09:47.411 EAL: Calling mem event callback 'spdk:(nil)' 00:09:47.411 EAL: request: mp_malloc_sync 00:09:47.411 EAL: No shared files mode enabled, IPC is disabled 00:09:47.411 EAL: Heap on socket 0 was expanded by 514MB 00:09:48.507 EAL: Calling mem event callback 'spdk:(nil)' 00:09:48.507 EAL: request: mp_malloc_sync 00:09:48.507 EAL: No shared files mode enabled, IPC is disabled 00:09:48.507 EAL: Heap on socket 0 was shrunk by 514MB 00:09:49.444 EAL: Trying to obtain current memory policy. 
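The heap expansions in vtophys_malloc_test follow a tidy pattern: each round allocates a doubling power-of-two buffer, and the heap grows by that amount plus one extra 2 MB hugepage (4, 6, 10, 18, 34, 66, 130, 258 and 514 MB so far, with a final 1026 MB round below). The sequence reproduces as shown here (the per-round overhead reading is an inference from the log, not verified against the allocator):

    # Heap growth per round = 2^k MiB request + one 2 MiB hugepage of overhead (inferred)
    for k in $(seq 1 10); do printf '%dMB ' $(( (1 << k) + 2 )); done; echo
    # -> 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB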
00:09:49.444 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:49.702 EAL: Restoring previous memory policy: 4 00:09:49.702 EAL: Calling mem event callback 'spdk:(nil)' 00:09:49.702 EAL: request: mp_malloc_sync 00:09:49.702 EAL: No shared files mode enabled, IPC is disabled 00:09:49.702 EAL: Heap on socket 0 was expanded by 1026MB 00:09:51.606 EAL: Calling mem event callback 'spdk:(nil)' 00:09:51.606 EAL: request: mp_malloc_sync 00:09:51.606 EAL: No shared files mode enabled, IPC is disabled 00:09:51.606 EAL: Heap on socket 0 was shrunk by 1026MB 00:09:53.507 passed 00:09:53.507 00:09:53.507 Run Summary: Type Total Ran Passed Failed Inactive 00:09:53.507 suites 1 1 n/a 0 0 00:09:53.507 tests 2 2 2 0 0 00:09:53.507 asserts 5677 5677 5677 0 n/a 00:09:53.507 00:09:53.507 Elapsed time = 8.287 seconds 00:09:53.507 EAL: Calling mem event callback 'spdk:(nil)' 00:09:53.507 EAL: request: mp_malloc_sync 00:09:53.507 EAL: No shared files mode enabled, IPC is disabled 00:09:53.507 EAL: Heap on socket 0 was shrunk by 2MB 00:09:53.507 EAL: No shared files mode enabled, IPC is disabled 00:09:53.507 EAL: No shared files mode enabled, IPC is disabled 00:09:53.507 EAL: No shared files mode enabled, IPC is disabled 00:09:53.507 00:09:53.507 real 0m8.659s 00:09:53.507 user 0m7.192s 00:09:53.507 sys 0m1.279s 00:09:53.507 10:01:23 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:53.507 10:01:23 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:09:53.507 ************************************ 00:09:53.507 END TEST env_vtophys 00:09:53.507 ************************************ 00:09:53.507 10:01:23 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:53.507 10:01:23 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:53.507 10:01:23 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:53.507 10:01:23 env -- common/autotest_common.sh@10 -- # set +x 00:09:53.507 ************************************ 00:09:53.507 START TEST env_pci 00:09:53.507 ************************************ 00:09:53.507 10:01:23 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:53.507 00:09:53.507 00:09:53.507 CUnit - A unit testing framework for C - Version 2.1-3 00:09:53.507 http://cunit.sourceforge.net/ 00:09:53.507 00:09:53.507 00:09:53.507 Suite: pci 00:09:53.507 Test: pci_hook ...[2024-12-09 10:01:23.960254] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57971 has claimed it 00:09:53.507 passed 00:09:53.507 00:09:53.507 Run Summary: Type Total Ran Passed Failed Inactive 00:09:53.507 suites 1 1 n/a 0 0 00:09:53.507 tests 1 1 1 0 0 00:09:53.507 asserts 25 25 25 0 n/a 00:09:53.507 00:09:53.507 Elapsed time = 0.009 seconds 00:09:53.507 EAL: Cannot find device (10000:00:01.0) 00:09:53.507 EAL: Failed to attach device on primary process 00:09:53.507 00:09:53.507 real 0m0.083s 00:09:53.507 user 0m0.037s 00:09:53.507 sys 0m0.045s 00:09:53.507 10:01:24 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:53.507 10:01:24 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:09:53.507 ************************************ 00:09:53.507 END TEST env_pci 00:09:53.507 ************************************ 00:09:53.507 10:01:24 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:09:53.507 10:01:24 env -- env/env.sh@15 -- # uname 00:09:53.507 10:01:24 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:09:53.507 10:01:24 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:09:53.507 10:01:24 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:53.507 10:01:24 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:53.507 10:01:24 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:53.507 10:01:24 env -- common/autotest_common.sh@10 -- # set +x 00:09:53.507 ************************************ 00:09:53.507 START TEST env_dpdk_post_init 00:09:53.507 ************************************ 00:09:53.507 10:01:24 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:53.507 EAL: Detected CPU lcores: 10 00:09:53.507 EAL: Detected NUMA nodes: 1 00:09:53.507 EAL: Detected shared linkage of DPDK 00:09:53.507 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:53.507 EAL: Selected IOVA mode 'PA' 00:09:53.507 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:53.766 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:09:53.766 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:09:53.766 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:09:53.766 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:09:53.766 Starting DPDK initialization... 00:09:53.766 Starting SPDK post initialization... 00:09:53.766 SPDK NVMe probe 00:09:53.766 Attaching to 0000:00:10.0 00:09:53.766 Attaching to 0000:00:11.0 00:09:53.766 Attaching to 0000:00:12.0 00:09:53.766 Attaching to 0000:00:13.0 00:09:53.766 Attached to 0000:00:10.0 00:09:53.766 Attached to 0000:00:11.0 00:09:53.766 Attached to 0000:00:13.0 00:09:53.766 Attached to 0000:00:12.0 00:09:53.766 Cleaning up... 
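The controllers probed above are QEMU's emulated NVMe devices (PCI ID 1b36:0010). The 'Attached' completions land in a different order than the 'Attaching' lines (00:13.0 before 00:12.0), presumably because attachment completes asynchronously per device. On the guest, the same four functions would show up as (illustrative):

    # Illustrative: list the emulated NVMe controllers the test just attached to
    lspci -nn | grep '1b36:0010'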
00:09:53.766 00:09:53.766 real 0m0.320s 00:09:53.766 user 0m0.109s 00:09:53.766 sys 0m0.108s 00:09:53.766 10:01:24 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:53.766 10:01:24 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:09:53.766 ************************************ 00:09:53.766 END TEST env_dpdk_post_init 00:09:53.766 ************************************ 00:09:53.766 10:01:24 env -- env/env.sh@26 -- # uname 00:09:53.766 10:01:24 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:09:53.766 10:01:24 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:53.766 10:01:24 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:53.766 10:01:24 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:53.766 10:01:24 env -- common/autotest_common.sh@10 -- # set +x 00:09:53.766 ************************************ 00:09:53.766 START TEST env_mem_callbacks 00:09:53.766 ************************************ 00:09:53.766 10:01:24 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:53.766 EAL: Detected CPU lcores: 10 00:09:53.766 EAL: Detected NUMA nodes: 1 00:09:53.766 EAL: Detected shared linkage of DPDK 00:09:53.766 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:53.766 EAL: Selected IOVA mode 'PA' 00:09:54.024 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:54.024 00:09:54.024 00:09:54.024 CUnit - A unit testing framework for C - Version 2.1-3 00:09:54.024 http://cunit.sourceforge.net/ 00:09:54.024 00:09:54.024 00:09:54.024 Suite: memory 00:09:54.024 Test: test ... 00:09:54.024 register 0x200000200000 2097152 00:09:54.024 malloc 3145728 00:09:54.024 register 0x200000400000 4194304 00:09:54.024 buf 0x2000004fffc0 len 3145728 PASSED 00:09:54.024 malloc 64 00:09:54.024 buf 0x2000004ffec0 len 64 PASSED 00:09:54.024 malloc 4194304 00:09:54.024 register 0x200000800000 6291456 00:09:54.024 buf 0x2000009fffc0 len 4194304 PASSED 00:09:54.024 free 0x2000004fffc0 3145728 00:09:54.024 free 0x2000004ffec0 64 00:09:54.024 unregister 0x200000400000 4194304 PASSED 00:09:54.024 free 0x2000009fffc0 4194304 00:09:54.024 unregister 0x200000800000 6291456 PASSED 00:09:54.024 malloc 8388608 00:09:54.024 register 0x200000400000 10485760 00:09:54.024 buf 0x2000005fffc0 len 8388608 PASSED 00:09:54.024 free 0x2000005fffc0 8388608 00:09:54.024 unregister 0x200000400000 10485760 PASSED 00:09:54.024 passed 00:09:54.024 00:09:54.025 Run Summary: Type Total Ran Passed Failed Inactive 00:09:54.025 suites 1 1 n/a 0 0 00:09:54.025 tests 1 1 1 0 0 00:09:54.025 asserts 15 15 15 0 n/a 00:09:54.025 00:09:54.025 Elapsed time = 0.064 seconds 00:09:54.025 00:09:54.025 real 0m0.273s 00:09:54.025 user 0m0.106s 00:09:54.025 sys 0m0.066s 00:09:54.025 10:01:24 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:54.025 10:01:24 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:09:54.025 ************************************ 00:09:54.025 END TEST env_mem_callbacks 00:09:54.025 ************************************ 00:09:54.025 00:09:54.025 real 0m10.181s 00:09:54.025 user 0m7.989s 00:09:54.025 sys 0m1.778s 00:09:54.025 10:01:24 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:54.025 ************************************ 00:09:54.025 END TEST env 00:09:54.025 10:01:24 env -- common/autotest_common.sh@10 -- # set +x 00:09:54.025 
************************************ 00:09:54.025 10:01:24 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:54.025 10:01:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:54.025 10:01:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:54.025 10:01:24 -- common/autotest_common.sh@10 -- # set +x 00:09:54.025 ************************************ 00:09:54.025 START TEST rpc 00:09:54.025 ************************************ 00:09:54.025 10:01:24 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:54.283 * Looking for test storage... 00:09:54.283 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:54.283 10:01:24 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:54.283 10:01:24 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:54.283 10:01:24 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:54.283 10:01:24 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:54.283 10:01:24 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:54.283 10:01:24 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:54.283 10:01:24 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:54.284 10:01:24 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:54.284 10:01:24 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:54.284 10:01:24 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:54.284 10:01:24 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:54.284 10:01:24 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:54.284 10:01:24 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:54.284 10:01:24 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:54.284 10:01:24 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:54.284 10:01:24 rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:54.284 10:01:24 rpc -- scripts/common.sh@345 -- # : 1 00:09:54.284 10:01:24 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:54.284 10:01:24 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:54.284 10:01:24 rpc -- scripts/common.sh@365 -- # decimal 1 00:09:54.284 10:01:24 rpc -- scripts/common.sh@353 -- # local d=1 00:09:54.284 10:01:24 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:54.284 10:01:24 rpc -- scripts/common.sh@355 -- # echo 1 00:09:54.284 10:01:24 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:54.284 10:01:24 rpc -- scripts/common.sh@366 -- # decimal 2 00:09:54.284 10:01:24 rpc -- scripts/common.sh@353 -- # local d=2 00:09:54.284 10:01:24 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:54.284 10:01:24 rpc -- scripts/common.sh@355 -- # echo 2 00:09:54.284 10:01:24 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:54.284 10:01:24 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:54.284 10:01:24 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:54.284 10:01:24 rpc -- scripts/common.sh@368 -- # return 0 00:09:54.284 10:01:24 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:54.284 10:01:24 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:54.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.284 --rc genhtml_branch_coverage=1 00:09:54.284 --rc genhtml_function_coverage=1 00:09:54.284 --rc genhtml_legend=1 00:09:54.284 --rc geninfo_all_blocks=1 00:09:54.284 --rc geninfo_unexecuted_blocks=1 00:09:54.284 00:09:54.284 ' 00:09:54.284 10:01:24 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:54.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.284 --rc genhtml_branch_coverage=1 00:09:54.284 --rc genhtml_function_coverage=1 00:09:54.284 --rc genhtml_legend=1 00:09:54.284 --rc geninfo_all_blocks=1 00:09:54.284 --rc geninfo_unexecuted_blocks=1 00:09:54.284 00:09:54.284 ' 00:09:54.284 10:01:24 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:54.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.284 --rc genhtml_branch_coverage=1 00:09:54.284 --rc genhtml_function_coverage=1 00:09:54.284 --rc genhtml_legend=1 00:09:54.284 --rc geninfo_all_blocks=1 00:09:54.284 --rc geninfo_unexecuted_blocks=1 00:09:54.284 00:09:54.284 ' 00:09:54.284 10:01:24 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:54.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.284 --rc genhtml_branch_coverage=1 00:09:54.284 --rc genhtml_function_coverage=1 00:09:54.284 --rc genhtml_legend=1 00:09:54.284 --rc geninfo_all_blocks=1 00:09:54.284 --rc geninfo_unexecuted_blocks=1 00:09:54.284 00:09:54.284 ' 00:09:54.284 10:01:24 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58098 00:09:54.284 10:01:24 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:09:54.284 10:01:24 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:54.284 10:01:24 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58098 00:09:54.284 10:01:24 rpc -- common/autotest_common.sh@835 -- # '[' -z 58098 ']' 00:09:54.284 10:01:24 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.284 10:01:24 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:54.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.284 10:01:24 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
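waitforlisten, traced above, blocks until the freshly started spdk_tgt (pid 58098) answers on its Unix-domain RPC socket, giving up after max_retries. Stripped of xtrace noise, the loop amounts to something like this (a sketch of the pattern, not the exact autotest helper):

    # Poll the target's RPC socket until it responds or retries run out.
    rpc_addr=/var/tmp/spdk.sock
    max_retries=100
    for (( i = 0; i < max_retries; i++ )); do
        if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
            break    # target is up and serving RPCs
        fi
        sleep 0.1
    done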
00:09:54.284 10:01:24 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:54.284 10:01:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:54.542 [2024-12-09 10:01:25.130290] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:09:54.542 [2024-12-09 10:01:25.130474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58098 ] 00:09:54.542 [2024-12-09 10:01:25.317361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.800 [2024-12-09 10:01:25.466774] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:09:54.800 [2024-12-09 10:01:25.466881] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58098' to capture a snapshot of events at runtime. 00:09:54.800 [2024-12-09 10:01:25.466900] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:54.800 [2024-12-09 10:01:25.466917] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:54.800 [2024-12-09 10:01:25.466928] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58098 for offline analysis/debug. 00:09:54.800 [2024-12-09 10:01:25.468313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.735 10:01:26 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:55.735 10:01:26 rpc -- common/autotest_common.sh@868 -- # return 0 00:09:55.735 10:01:26 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:55.735 10:01:26 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:55.735 10:01:26 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:09:55.735 10:01:26 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:09:55.735 10:01:26 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:55.735 10:01:26 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:55.735 10:01:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:55.735 ************************************ 00:09:55.735 START TEST rpc_integrity 00:09:55.735 ************************************ 00:09:55.735 10:01:26 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:09:55.735 10:01:26 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:55.735 10:01:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.735 10:01:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:55.735 10:01:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.735 10:01:26 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:55.735 10:01:26 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:55.735 10:01:26 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:55.735 10:01:26 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:55.735 10:01:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.735 10:01:26 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:55.993 10:01:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.993 10:01:26 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:09:55.993 10:01:26 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:55.993 10:01:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.993 10:01:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:55.993 10:01:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.993 10:01:26 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:55.993 { 00:09:55.993 "name": "Malloc0", 00:09:55.993 "aliases": [ 00:09:55.993 "836f7087-5d15-43bc-b6ed-7d3f90d3e0dc" 00:09:55.993 ], 00:09:55.993 "product_name": "Malloc disk", 00:09:55.993 "block_size": 512, 00:09:55.993 "num_blocks": 16384, 00:09:55.993 "uuid": "836f7087-5d15-43bc-b6ed-7d3f90d3e0dc", 00:09:55.993 "assigned_rate_limits": { 00:09:55.993 "rw_ios_per_sec": 0, 00:09:55.993 "rw_mbytes_per_sec": 0, 00:09:55.993 "r_mbytes_per_sec": 0, 00:09:55.993 "w_mbytes_per_sec": 0 00:09:55.993 }, 00:09:55.993 "claimed": false, 00:09:55.993 "zoned": false, 00:09:55.993 "supported_io_types": { 00:09:55.993 "read": true, 00:09:55.993 "write": true, 00:09:55.993 "unmap": true, 00:09:55.993 "flush": true, 00:09:55.993 "reset": true, 00:09:55.993 "nvme_admin": false, 00:09:55.993 "nvme_io": false, 00:09:55.993 "nvme_io_md": false, 00:09:55.993 "write_zeroes": true, 00:09:55.993 "zcopy": true, 00:09:55.993 "get_zone_info": false, 00:09:55.993 "zone_management": false, 00:09:55.993 "zone_append": false, 00:09:55.993 "compare": false, 00:09:55.993 "compare_and_write": false, 00:09:55.993 "abort": true, 00:09:55.993 "seek_hole": false, 00:09:55.993 "seek_data": false, 00:09:55.993 "copy": true, 00:09:55.993 "nvme_iov_md": false 00:09:55.993 }, 00:09:55.993 "memory_domains": [ 00:09:55.993 { 00:09:55.993 "dma_device_id": "system", 00:09:55.993 "dma_device_type": 1 00:09:55.993 }, 00:09:55.993 { 00:09:55.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.993 "dma_device_type": 2 00:09:55.993 } 00:09:55.993 ], 00:09:55.993 "driver_specific": {} 00:09:55.993 } 00:09:55.993 ]' 00:09:55.993 10:01:26 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:55.993 10:01:26 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:55.993 10:01:26 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:09:55.993 10:01:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.993 10:01:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:55.993 [2024-12-09 10:01:26.628372] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:09:55.993 [2024-12-09 10:01:26.628466] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.993 [2024-12-09 10:01:26.628513] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:55.993 [2024-12-09 10:01:26.628533] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.993 [2024-12-09 10:01:26.631876] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.993 [2024-12-09 10:01:26.631934] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:55.993 Passthru0 00:09:55.993 10:01:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.993 
10:01:26 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:55.993 10:01:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.993 10:01:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:55.993 10:01:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.994 10:01:26 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:55.994 { 00:09:55.994 "name": "Malloc0", 00:09:55.994 "aliases": [ 00:09:55.994 "836f7087-5d15-43bc-b6ed-7d3f90d3e0dc" 00:09:55.994 ], 00:09:55.994 "product_name": "Malloc disk", 00:09:55.994 "block_size": 512, 00:09:55.994 "num_blocks": 16384, 00:09:55.994 "uuid": "836f7087-5d15-43bc-b6ed-7d3f90d3e0dc", 00:09:55.994 "assigned_rate_limits": { 00:09:55.994 "rw_ios_per_sec": 0, 00:09:55.994 "rw_mbytes_per_sec": 0, 00:09:55.994 "r_mbytes_per_sec": 0, 00:09:55.994 "w_mbytes_per_sec": 0 00:09:55.994 }, 00:09:55.994 "claimed": true, 00:09:55.994 "claim_type": "exclusive_write", 00:09:55.994 "zoned": false, 00:09:55.994 "supported_io_types": { 00:09:55.994 "read": true, 00:09:55.994 "write": true, 00:09:55.994 "unmap": true, 00:09:55.994 "flush": true, 00:09:55.994 "reset": true, 00:09:55.994 "nvme_admin": false, 00:09:55.994 "nvme_io": false, 00:09:55.994 "nvme_io_md": false, 00:09:55.994 "write_zeroes": true, 00:09:55.994 "zcopy": true, 00:09:55.994 "get_zone_info": false, 00:09:55.994 "zone_management": false, 00:09:55.994 "zone_append": false, 00:09:55.994 "compare": false, 00:09:55.994 "compare_and_write": false, 00:09:55.994 "abort": true, 00:09:55.994 "seek_hole": false, 00:09:55.994 "seek_data": false, 00:09:55.994 "copy": true, 00:09:55.994 "nvme_iov_md": false 00:09:55.994 }, 00:09:55.994 "memory_domains": [ 00:09:55.994 { 00:09:55.994 "dma_device_id": "system", 00:09:55.994 "dma_device_type": 1 00:09:55.994 }, 00:09:55.994 { 00:09:55.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.994 "dma_device_type": 2 00:09:55.994 } 00:09:55.994 ], 00:09:55.994 "driver_specific": {} 00:09:55.994 }, 00:09:55.994 { 00:09:55.994 "name": "Passthru0", 00:09:55.994 "aliases": [ 00:09:55.994 "0ef7605b-6ab2-5f3e-ad4c-5602cda8b31e" 00:09:55.994 ], 00:09:55.994 "product_name": "passthru", 00:09:55.994 "block_size": 512, 00:09:55.994 "num_blocks": 16384, 00:09:55.994 "uuid": "0ef7605b-6ab2-5f3e-ad4c-5602cda8b31e", 00:09:55.994 "assigned_rate_limits": { 00:09:55.994 "rw_ios_per_sec": 0, 00:09:55.994 "rw_mbytes_per_sec": 0, 00:09:55.994 "r_mbytes_per_sec": 0, 00:09:55.994 "w_mbytes_per_sec": 0 00:09:55.994 }, 00:09:55.994 "claimed": false, 00:09:55.994 "zoned": false, 00:09:55.994 "supported_io_types": { 00:09:55.994 "read": true, 00:09:55.994 "write": true, 00:09:55.994 "unmap": true, 00:09:55.994 "flush": true, 00:09:55.994 "reset": true, 00:09:55.994 "nvme_admin": false, 00:09:55.994 "nvme_io": false, 00:09:55.994 "nvme_io_md": false, 00:09:55.994 "write_zeroes": true, 00:09:55.994 "zcopy": true, 00:09:55.994 "get_zone_info": false, 00:09:55.994 "zone_management": false, 00:09:55.994 "zone_append": false, 00:09:55.994 "compare": false, 00:09:55.994 "compare_and_write": false, 00:09:55.994 "abort": true, 00:09:55.994 "seek_hole": false, 00:09:55.994 "seek_data": false, 00:09:55.994 "copy": true, 00:09:55.994 "nvme_iov_md": false 00:09:55.994 }, 00:09:55.994 "memory_domains": [ 00:09:55.994 { 00:09:55.994 "dma_device_id": "system", 00:09:55.994 "dma_device_type": 1 00:09:55.994 }, 00:09:55.994 { 00:09:55.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.994 "dma_device_type": 2 
00:09:55.994 } 00:09:55.994 ], 00:09:55.994 "driver_specific": { 00:09:55.994 "passthru": { 00:09:55.994 "name": "Passthru0", 00:09:55.994 "base_bdev_name": "Malloc0" 00:09:55.994 } 00:09:55.994 } 00:09:55.994 } 00:09:55.994 ]' 00:09:55.994 10:01:26 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:55.994 10:01:26 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:55.994 10:01:26 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:55.994 10:01:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.994 10:01:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:55.994 10:01:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.994 10:01:26 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:09:55.994 10:01:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.994 10:01:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:55.994 10:01:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.994 10:01:26 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:55.994 10:01:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.994 10:01:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:55.994 10:01:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.994 10:01:26 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:55.994 10:01:26 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:56.252 10:01:26 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:56.252 00:09:56.252 real 0m0.356s 00:09:56.252 user 0m0.211s 00:09:56.252 sys 0m0.047s 00:09:56.252 10:01:26 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.252 ************************************ 00:09:56.252 10:01:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:56.252 END TEST rpc_integrity 00:09:56.252 ************************************ 00:09:56.252 10:01:26 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:09:56.252 10:01:26 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:56.252 10:01:26 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.252 10:01:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:56.252 ************************************ 00:09:56.252 START TEST rpc_plugins 00:09:56.252 ************************************ 00:09:56.252 10:01:26 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:09:56.252 10:01:26 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:09:56.252 10:01:26 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.252 10:01:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:56.252 10:01:26 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.252 10:01:26 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:09:56.252 10:01:26 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:09:56.252 10:01:26 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.252 10:01:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:56.252 10:01:26 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.252 10:01:26 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:09:56.252 { 00:09:56.252 "name": "Malloc1", 00:09:56.252 "aliases": 
[ 00:09:56.252 "bc5c77cb-b02a-42b2-ad83-547b6c606281" 00:09:56.252 ], 00:09:56.252 "product_name": "Malloc disk", 00:09:56.252 "block_size": 4096, 00:09:56.252 "num_blocks": 256, 00:09:56.252 "uuid": "bc5c77cb-b02a-42b2-ad83-547b6c606281", 00:09:56.252 "assigned_rate_limits": { 00:09:56.252 "rw_ios_per_sec": 0, 00:09:56.252 "rw_mbytes_per_sec": 0, 00:09:56.252 "r_mbytes_per_sec": 0, 00:09:56.252 "w_mbytes_per_sec": 0 00:09:56.252 }, 00:09:56.252 "claimed": false, 00:09:56.252 "zoned": false, 00:09:56.252 "supported_io_types": { 00:09:56.252 "read": true, 00:09:56.252 "write": true, 00:09:56.252 "unmap": true, 00:09:56.252 "flush": true, 00:09:56.252 "reset": true, 00:09:56.252 "nvme_admin": false, 00:09:56.252 "nvme_io": false, 00:09:56.252 "nvme_io_md": false, 00:09:56.252 "write_zeroes": true, 00:09:56.252 "zcopy": true, 00:09:56.252 "get_zone_info": false, 00:09:56.252 "zone_management": false, 00:09:56.252 "zone_append": false, 00:09:56.252 "compare": false, 00:09:56.252 "compare_and_write": false, 00:09:56.252 "abort": true, 00:09:56.252 "seek_hole": false, 00:09:56.252 "seek_data": false, 00:09:56.252 "copy": true, 00:09:56.252 "nvme_iov_md": false 00:09:56.252 }, 00:09:56.252 "memory_domains": [ 00:09:56.252 { 00:09:56.252 "dma_device_id": "system", 00:09:56.252 "dma_device_type": 1 00:09:56.252 }, 00:09:56.252 { 00:09:56.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.252 "dma_device_type": 2 00:09:56.252 } 00:09:56.252 ], 00:09:56.252 "driver_specific": {} 00:09:56.252 } 00:09:56.252 ]' 00:09:56.252 10:01:26 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:09:56.253 10:01:26 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:09:56.253 10:01:26 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:09:56.253 10:01:26 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.253 10:01:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:56.253 10:01:26 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.253 10:01:26 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:09:56.253 10:01:26 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.253 10:01:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:56.253 10:01:26 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.253 10:01:26 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:09:56.253 10:01:26 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:09:56.253 10:01:27 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:09:56.253 00:09:56.253 real 0m0.162s 00:09:56.253 user 0m0.098s 00:09:56.253 sys 0m0.024s 00:09:56.253 10:01:27 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.253 ************************************ 00:09:56.253 END TEST rpc_plugins 00:09:56.253 10:01:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:56.253 ************************************ 00:09:56.510 10:01:27 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:09:56.510 10:01:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:56.510 10:01:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.510 10:01:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:56.510 ************************************ 00:09:56.510 START TEST rpc_trace_cmd_test 00:09:56.510 ************************************ 00:09:56.510 10:01:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:09:56.510 10:01:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:09:56.510 10:01:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:09:56.510 10:01:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.510 10:01:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.510 10:01:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.510 10:01:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:09:56.510 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58098", 00:09:56.510 "tpoint_group_mask": "0x8", 00:09:56.510 "iscsi_conn": { 00:09:56.510 "mask": "0x2", 00:09:56.510 "tpoint_mask": "0x0" 00:09:56.510 }, 00:09:56.510 "scsi": { 00:09:56.510 "mask": "0x4", 00:09:56.510 "tpoint_mask": "0x0" 00:09:56.510 }, 00:09:56.510 "bdev": { 00:09:56.510 "mask": "0x8", 00:09:56.510 "tpoint_mask": "0xffffffffffffffff" 00:09:56.510 }, 00:09:56.510 "nvmf_rdma": { 00:09:56.510 "mask": "0x10", 00:09:56.510 "tpoint_mask": "0x0" 00:09:56.510 }, 00:09:56.510 "nvmf_tcp": { 00:09:56.510 "mask": "0x20", 00:09:56.510 "tpoint_mask": "0x0" 00:09:56.510 }, 00:09:56.510 "ftl": { 00:09:56.510 "mask": "0x40", 00:09:56.510 "tpoint_mask": "0x0" 00:09:56.510 }, 00:09:56.510 "blobfs": { 00:09:56.510 "mask": "0x80", 00:09:56.510 "tpoint_mask": "0x0" 00:09:56.510 }, 00:09:56.510 "dsa": { 00:09:56.510 "mask": "0x200", 00:09:56.510 "tpoint_mask": "0x0" 00:09:56.510 }, 00:09:56.510 "thread": { 00:09:56.510 "mask": "0x400", 00:09:56.510 "tpoint_mask": "0x0" 00:09:56.510 }, 00:09:56.510 "nvme_pcie": { 00:09:56.510 "mask": "0x800", 00:09:56.510 "tpoint_mask": "0x0" 00:09:56.510 }, 00:09:56.510 "iaa": { 00:09:56.510 "mask": "0x1000", 00:09:56.510 "tpoint_mask": "0x0" 00:09:56.510 }, 00:09:56.510 "nvme_tcp": { 00:09:56.510 "mask": "0x2000", 00:09:56.510 "tpoint_mask": "0x0" 00:09:56.510 }, 00:09:56.510 "bdev_nvme": { 00:09:56.510 "mask": "0x4000", 00:09:56.510 "tpoint_mask": "0x0" 00:09:56.510 }, 00:09:56.510 "sock": { 00:09:56.510 "mask": "0x8000", 00:09:56.510 "tpoint_mask": "0x0" 00:09:56.510 }, 00:09:56.510 "blob": { 00:09:56.510 "mask": "0x10000", 00:09:56.510 "tpoint_mask": "0x0" 00:09:56.510 }, 00:09:56.510 "bdev_raid": { 00:09:56.510 "mask": "0x20000", 00:09:56.510 "tpoint_mask": "0x0" 00:09:56.510 }, 00:09:56.510 "scheduler": { 00:09:56.510 "mask": "0x40000", 00:09:56.510 "tpoint_mask": "0x0" 00:09:56.510 } 00:09:56.510 }' 00:09:56.510 10:01:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:09:56.510 10:01:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:09:56.510 10:01:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:09:56.510 10:01:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:09:56.510 10:01:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:09:56.510 10:01:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:09:56.510 10:01:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:09:56.768 10:01:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:09:56.768 10:01:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:09:56.768 10:01:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:09:56.768 00:09:56.768 real 0m0.292s 00:09:56.768 user 0m0.256s 00:09:56.768 sys 0m0.022s 00:09:56.768 10:01:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:09:56.768 10:01:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.768 ************************************ 00:09:56.768 END TEST rpc_trace_cmd_test 00:09:56.768 ************************************ 00:09:56.768 10:01:27 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:09:56.768 10:01:27 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:09:56.769 10:01:27 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:09:56.769 10:01:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:56.769 10:01:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.769 10:01:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:56.769 ************************************ 00:09:56.769 START TEST rpc_daemon_integrity 00:09:56.769 ************************************ 00:09:56.769 10:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:09:56.769 10:01:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:56.769 10:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.769 10:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:56.769 10:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.769 10:01:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:56.769 10:01:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:56.769 10:01:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:56.769 10:01:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:56.769 10:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.769 10:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:56.769 10:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.769 10:01:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:09:56.769 10:01:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:56.769 10:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.769 10:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:56.769 10:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.769 10:01:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:56.769 { 00:09:56.769 "name": "Malloc2", 00:09:56.769 "aliases": [ 00:09:56.769 "725b876f-014a-495a-9d1a-166f86a0b316" 00:09:56.769 ], 00:09:56.769 "product_name": "Malloc disk", 00:09:56.769 "block_size": 512, 00:09:56.769 "num_blocks": 16384, 00:09:56.769 "uuid": "725b876f-014a-495a-9d1a-166f86a0b316", 00:09:56.769 "assigned_rate_limits": { 00:09:56.769 "rw_ios_per_sec": 0, 00:09:56.769 "rw_mbytes_per_sec": 0, 00:09:56.769 "r_mbytes_per_sec": 0, 00:09:56.769 "w_mbytes_per_sec": 0 00:09:56.769 }, 00:09:56.769 "claimed": false, 00:09:56.769 "zoned": false, 00:09:56.769 "supported_io_types": { 00:09:56.769 "read": true, 00:09:56.769 "write": true, 00:09:56.769 "unmap": true, 00:09:56.769 "flush": true, 00:09:56.769 "reset": true, 00:09:56.769 "nvme_admin": false, 00:09:56.769 "nvme_io": false, 00:09:56.769 "nvme_io_md": false, 00:09:56.769 "write_zeroes": true, 00:09:56.769 "zcopy": true, 00:09:56.769 "get_zone_info": false, 00:09:56.769 "zone_management": false, 00:09:56.769 "zone_append": false, 00:09:56.769 "compare": false, 00:09:56.769 
"compare_and_write": false, 00:09:56.769 "abort": true, 00:09:56.769 "seek_hole": false, 00:09:56.769 "seek_data": false, 00:09:56.769 "copy": true, 00:09:56.769 "nvme_iov_md": false 00:09:56.769 }, 00:09:56.769 "memory_domains": [ 00:09:56.769 { 00:09:56.769 "dma_device_id": "system", 00:09:56.769 "dma_device_type": 1 00:09:56.769 }, 00:09:56.769 { 00:09:56.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.769 "dma_device_type": 2 00:09:56.769 } 00:09:56.769 ], 00:09:56.769 "driver_specific": {} 00:09:56.769 } 00:09:56.769 ]' 00:09:56.769 10:01:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:57.027 10:01:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:57.027 10:01:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:09:57.027 10:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.027 10:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:57.027 [2024-12-09 10:01:27.591872] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:09:57.027 [2024-12-09 10:01:27.591965] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.027 [2024-12-09 10:01:27.592003] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:57.027 [2024-12-09 10:01:27.592022] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.027 [2024-12-09 10:01:27.595293] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.027 [2024-12-09 10:01:27.595348] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:57.027 Passthru0 00:09:57.027 10:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.027 10:01:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:57.027 10:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.027 10:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:57.027 10:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.027 10:01:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:57.027 { 00:09:57.027 "name": "Malloc2", 00:09:57.027 "aliases": [ 00:09:57.027 "725b876f-014a-495a-9d1a-166f86a0b316" 00:09:57.027 ], 00:09:57.027 "product_name": "Malloc disk", 00:09:57.027 "block_size": 512, 00:09:57.027 "num_blocks": 16384, 00:09:57.027 "uuid": "725b876f-014a-495a-9d1a-166f86a0b316", 00:09:57.027 "assigned_rate_limits": { 00:09:57.027 "rw_ios_per_sec": 0, 00:09:57.027 "rw_mbytes_per_sec": 0, 00:09:57.027 "r_mbytes_per_sec": 0, 00:09:57.027 "w_mbytes_per_sec": 0 00:09:57.027 }, 00:09:57.027 "claimed": true, 00:09:57.027 "claim_type": "exclusive_write", 00:09:57.027 "zoned": false, 00:09:57.027 "supported_io_types": { 00:09:57.027 "read": true, 00:09:57.027 "write": true, 00:09:57.027 "unmap": true, 00:09:57.027 "flush": true, 00:09:57.027 "reset": true, 00:09:57.027 "nvme_admin": false, 00:09:57.027 "nvme_io": false, 00:09:57.027 "nvme_io_md": false, 00:09:57.027 "write_zeroes": true, 00:09:57.027 "zcopy": true, 00:09:57.027 "get_zone_info": false, 00:09:57.027 "zone_management": false, 00:09:57.027 "zone_append": false, 00:09:57.027 "compare": false, 00:09:57.027 "compare_and_write": false, 00:09:57.027 "abort": true, 00:09:57.027 "seek_hole": false, 00:09:57.027 "seek_data": false, 
00:09:57.027 "copy": true, 00:09:57.027 "nvme_iov_md": false 00:09:57.027 }, 00:09:57.027 "memory_domains": [ 00:09:57.027 { 00:09:57.027 "dma_device_id": "system", 00:09:57.027 "dma_device_type": 1 00:09:57.027 }, 00:09:57.027 { 00:09:57.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.027 "dma_device_type": 2 00:09:57.027 } 00:09:57.027 ], 00:09:57.027 "driver_specific": {} 00:09:57.027 }, 00:09:57.027 { 00:09:57.027 "name": "Passthru0", 00:09:57.027 "aliases": [ 00:09:57.027 "1bbb5899-664b-55ea-8ebf-65417da30346" 00:09:57.027 ], 00:09:57.027 "product_name": "passthru", 00:09:57.027 "block_size": 512, 00:09:57.027 "num_blocks": 16384, 00:09:57.027 "uuid": "1bbb5899-664b-55ea-8ebf-65417da30346", 00:09:57.027 "assigned_rate_limits": { 00:09:57.027 "rw_ios_per_sec": 0, 00:09:57.027 "rw_mbytes_per_sec": 0, 00:09:57.027 "r_mbytes_per_sec": 0, 00:09:57.027 "w_mbytes_per_sec": 0 00:09:57.027 }, 00:09:57.027 "claimed": false, 00:09:57.027 "zoned": false, 00:09:57.027 "supported_io_types": { 00:09:57.027 "read": true, 00:09:57.027 "write": true, 00:09:57.027 "unmap": true, 00:09:57.027 "flush": true, 00:09:57.027 "reset": true, 00:09:57.027 "nvme_admin": false, 00:09:57.027 "nvme_io": false, 00:09:57.027 "nvme_io_md": false, 00:09:57.027 "write_zeroes": true, 00:09:57.027 "zcopy": true, 00:09:57.027 "get_zone_info": false, 00:09:57.027 "zone_management": false, 00:09:57.027 "zone_append": false, 00:09:57.027 "compare": false, 00:09:57.027 "compare_and_write": false, 00:09:57.027 "abort": true, 00:09:57.027 "seek_hole": false, 00:09:57.027 "seek_data": false, 00:09:57.027 "copy": true, 00:09:57.027 "nvme_iov_md": false 00:09:57.027 }, 00:09:57.027 "memory_domains": [ 00:09:57.027 { 00:09:57.027 "dma_device_id": "system", 00:09:57.027 "dma_device_type": 1 00:09:57.027 }, 00:09:57.027 { 00:09:57.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.027 "dma_device_type": 2 00:09:57.027 } 00:09:57.027 ], 00:09:57.027 "driver_specific": { 00:09:57.027 "passthru": { 00:09:57.027 "name": "Passthru0", 00:09:57.027 "base_bdev_name": "Malloc2" 00:09:57.027 } 00:09:57.027 } 00:09:57.027 } 00:09:57.027 ]' 00:09:57.028 10:01:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:57.028 10:01:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:57.028 10:01:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:57.028 10:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.028 10:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:57.028 10:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.028 10:01:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:09:57.028 10:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.028 10:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:57.028 10:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.028 10:01:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:57.028 10:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.028 10:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:57.028 10:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.028 10:01:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:09:57.028 10:01:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:57.028 10:01:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:57.028 00:09:57.028 real 0m0.344s 00:09:57.028 user 0m0.208s 00:09:57.028 sys 0m0.040s 00:09:57.028 10:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:57.028 10:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:57.028 ************************************ 00:09:57.028 END TEST rpc_daemon_integrity 00:09:57.028 ************************************ 00:09:57.028 10:01:27 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:09:57.028 10:01:27 rpc -- rpc/rpc.sh@84 -- # killprocess 58098 00:09:57.028 10:01:27 rpc -- common/autotest_common.sh@954 -- # '[' -z 58098 ']' 00:09:57.028 10:01:27 rpc -- common/autotest_common.sh@958 -- # kill -0 58098 00:09:57.028 10:01:27 rpc -- common/autotest_common.sh@959 -- # uname 00:09:57.028 10:01:27 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:57.286 10:01:27 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58098 00:09:57.286 10:01:27 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:57.286 10:01:27 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:57.286 killing process with pid 58098 00:09:57.286 10:01:27 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58098' 00:09:57.286 10:01:27 rpc -- common/autotest_common.sh@973 -- # kill 58098 00:09:57.286 10:01:27 rpc -- common/autotest_common.sh@978 -- # wait 58098 00:09:59.855 00:09:59.855 real 0m5.634s 00:09:59.855 user 0m6.163s 00:09:59.855 sys 0m1.036s 00:09:59.855 10:01:30 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.855 10:01:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:59.855 ************************************ 00:09:59.855 END TEST rpc 00:09:59.855 ************************************ 00:09:59.855 10:01:30 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:59.855 10:01:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:59.855 10:01:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.855 10:01:30 -- common/autotest_common.sh@10 -- # set +x 00:09:59.855 ************************************ 00:09:59.855 START TEST skip_rpc 00:09:59.855 ************************************ 00:09:59.855 10:01:30 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:59.855 * Looking for test storage... 
00:09:59.855 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:59.855 10:01:30 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:59.855 10:01:30 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:59.855 10:01:30 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:59.855 10:01:30 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:59.855 10:01:30 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:00.113 10:01:30 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:00.113 10:01:30 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:00.113 10:01:30 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:00.113 10:01:30 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:00.113 10:01:30 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:00.113 10:01:30 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:00.113 10:01:30 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:00.113 10:01:30 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:00.113 10:01:30 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:00.113 10:01:30 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:00.113 10:01:30 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:00.113 10:01:30 skip_rpc -- scripts/common.sh@345 -- # : 1 00:10:00.113 10:01:30 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:00.113 10:01:30 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:00.113 10:01:30 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:00.113 10:01:30 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:10:00.113 10:01:30 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:00.113 10:01:30 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:10:00.113 10:01:30 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:00.113 10:01:30 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:00.113 10:01:30 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:10:00.113 10:01:30 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:00.113 10:01:30 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:10:00.113 10:01:30 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:00.113 10:01:30 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:00.113 10:01:30 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:00.113 10:01:30 skip_rpc -- scripts/common.sh@368 -- # return 0 00:10:00.113 10:01:30 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:00.113 10:01:30 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:00.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.113 --rc genhtml_branch_coverage=1 00:10:00.113 --rc genhtml_function_coverage=1 00:10:00.113 --rc genhtml_legend=1 00:10:00.113 --rc geninfo_all_blocks=1 00:10:00.114 --rc geninfo_unexecuted_blocks=1 00:10:00.114 00:10:00.114 ' 00:10:00.114 10:01:30 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:00.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.114 --rc genhtml_branch_coverage=1 00:10:00.114 --rc genhtml_function_coverage=1 00:10:00.114 --rc genhtml_legend=1 00:10:00.114 --rc geninfo_all_blocks=1 00:10:00.114 --rc geninfo_unexecuted_blocks=1 00:10:00.114 00:10:00.114 ' 00:10:00.114 10:01:30 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:10:00.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.114 --rc genhtml_branch_coverage=1 00:10:00.114 --rc genhtml_function_coverage=1 00:10:00.114 --rc genhtml_legend=1 00:10:00.114 --rc geninfo_all_blocks=1 00:10:00.114 --rc geninfo_unexecuted_blocks=1 00:10:00.114 00:10:00.114 ' 00:10:00.114 10:01:30 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:00.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.114 --rc genhtml_branch_coverage=1 00:10:00.114 --rc genhtml_function_coverage=1 00:10:00.114 --rc genhtml_legend=1 00:10:00.114 --rc geninfo_all_blocks=1 00:10:00.114 --rc geninfo_unexecuted_blocks=1 00:10:00.114 00:10:00.114 ' 00:10:00.114 10:01:30 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:00.114 10:01:30 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:10:00.114 10:01:30 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:10:00.114 10:01:30 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:00.114 10:01:30 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:00.114 10:01:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:00.114 ************************************ 00:10:00.114 START TEST skip_rpc 00:10:00.114 ************************************ 00:10:00.114 10:01:30 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:10:00.114 10:01:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58332 00:10:00.114 10:01:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:00.114 10:01:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:10:00.114 10:01:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:10:00.114 [2024-12-09 10:01:30.828244] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
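The skip_rpc case being set up here reduces to: start spdk_tgt with --no-rpc-server, then verify that any RPC fails because nothing listens on the socket. A minimal sketch of that flow, assuming an SPDK checkout (the test's killprocess/trap plumbing simplified to a plain kill):

  # start the target with the RPC server disabled
  build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  tgt_pid=$!
  sleep 5   # crude stand-in for the test's startup wait
  # with no RPC server, this call must fail (the test wraps it in NOT and expects es=1)
  if scripts/rpc.py spdk_get_version; then
      echo "unexpected: RPC succeeded without an RPC server" >&2
      exit 1
  fi
  kill "$tgt_pid"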
00:10:00.114 [2024-12-09 10:01:30.829268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58332 ] 00:10:00.372 [2024-12-09 10:01:31.017100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.631 [2024-12-09 10:01:31.209668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.899 10:01:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:10:05.899 10:01:35 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:05.899 10:01:35 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:10:05.899 10:01:35 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:05.899 10:01:35 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:05.899 10:01:35 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:05.899 10:01:35 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:05.899 10:01:35 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:10:05.899 10:01:35 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.899 10:01:35 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.899 10:01:35 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:05.899 10:01:35 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:05.899 10:01:35 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:05.899 10:01:35 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:05.899 10:01:35 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:05.899 10:01:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:10:05.899 10:01:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58332 00:10:05.899 10:01:35 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58332 ']' 00:10:05.899 10:01:35 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58332 00:10:05.899 10:01:35 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:10:05.899 10:01:35 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:05.899 10:01:35 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58332 00:10:05.899 killing process with pid 58332 00:10:05.899 10:01:35 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:05.899 10:01:35 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:05.899 10:01:35 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58332' 00:10:05.899 10:01:35 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 58332 00:10:05.899 10:01:35 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58332 00:10:07.804 ************************************ 00:10:07.804 END TEST skip_rpc 00:10:07.804 ************************************ 00:10:07.804 00:10:07.804 real 0m7.624s 00:10:07.804 user 0m6.936s 00:10:07.804 sys 0m0.579s 00:10:07.804 10:01:38 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.804 10:01:38 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:10:07.804 10:01:38 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:10:07.804 10:01:38 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:07.804 10:01:38 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.804 10:01:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:07.804 ************************************ 00:10:07.804 START TEST skip_rpc_with_json 00:10:07.804 ************************************ 00:10:07.804 10:01:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:10:07.804 10:01:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:10:07.804 10:01:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58442 00:10:07.804 10:01:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:07.804 10:01:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:07.804 10:01:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58442 00:10:07.804 10:01:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58442 ']' 00:10:07.804 10:01:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.804 10:01:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:07.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.804 10:01:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.804 10:01:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:07.804 10:01:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:07.804 [2024-12-09 10:01:38.516245] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
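skip_rpc_with_json, starting here, exercises the save_config/--json round trip: create a TCP transport over RPC, snapshot the running configuration (the JSON printed below), restart the target from that file with the RPC server disabled, and check the target's log to confirm the transport was rebuilt purely from JSON. Roughly, using the test's own file locations:

  # configure a transport on the RPC-enabled target, then snapshot the whole config
  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py save_config > test/rpc/config.json
  # (stop the RPC-enabled target here before relaunching)
  # restart without an RPC server, loading the saved config instead
  build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json > test/rpc/log.txt 2>&1 &
  sleep 5
  # the transport init notice proves the JSON config was replayed
  grep -q 'TCP Transport Init' test/rpc/log.txt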
00:10:07.804 [2024-12-09 10:01:38.516514] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58442 ] 00:10:08.063 [2024-12-09 10:01:38.710197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.321 [2024-12-09 10:01:38.868247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.256 10:01:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:09.256 10:01:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:10:09.256 10:01:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:10:09.256 10:01:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.256 10:01:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:09.256 [2024-12-09 10:01:39.913202] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:10:09.256 request: 00:10:09.256 { 00:10:09.256 "trtype": "tcp", 00:10:09.256 "method": "nvmf_get_transports", 00:10:09.256 "req_id": 1 00:10:09.256 } 00:10:09.256 Got JSON-RPC error response 00:10:09.256 response: 00:10:09.256 { 00:10:09.256 "code": -19, 00:10:09.256 "message": "No such device" 00:10:09.256 } 00:10:09.256 10:01:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:09.256 10:01:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:10:09.256 10:01:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.256 10:01:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:09.256 [2024-12-09 10:01:39.925391] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:09.256 10:01:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.256 10:01:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:10:09.256 10:01:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.256 10:01:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:09.516 10:01:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.516 10:01:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:09.516 { 00:10:09.516 "subsystems": [ 00:10:09.516 { 00:10:09.516 "subsystem": "fsdev", 00:10:09.516 "config": [ 00:10:09.516 { 00:10:09.516 "method": "fsdev_set_opts", 00:10:09.516 "params": { 00:10:09.516 "fsdev_io_pool_size": 65535, 00:10:09.516 "fsdev_io_cache_size": 256 00:10:09.516 } 00:10:09.516 } 00:10:09.516 ] 00:10:09.516 }, 00:10:09.516 { 00:10:09.516 "subsystem": "keyring", 00:10:09.516 "config": [] 00:10:09.516 }, 00:10:09.516 { 00:10:09.516 "subsystem": "iobuf", 00:10:09.516 "config": [ 00:10:09.516 { 00:10:09.516 "method": "iobuf_set_options", 00:10:09.516 "params": { 00:10:09.516 "small_pool_count": 8192, 00:10:09.516 "large_pool_count": 1024, 00:10:09.516 "small_bufsize": 8192, 00:10:09.516 "large_bufsize": 135168, 00:10:09.516 "enable_numa": false 00:10:09.516 } 00:10:09.516 } 00:10:09.516 ] 00:10:09.516 }, 00:10:09.516 { 00:10:09.516 "subsystem": "sock", 00:10:09.516 "config": [ 00:10:09.516 { 
00:10:09.516 "method": "sock_set_default_impl", 00:10:09.516 "params": { 00:10:09.516 "impl_name": "posix" 00:10:09.516 } 00:10:09.516 }, 00:10:09.516 { 00:10:09.516 "method": "sock_impl_set_options", 00:10:09.516 "params": { 00:10:09.516 "impl_name": "ssl", 00:10:09.516 "recv_buf_size": 4096, 00:10:09.516 "send_buf_size": 4096, 00:10:09.516 "enable_recv_pipe": true, 00:10:09.516 "enable_quickack": false, 00:10:09.516 "enable_placement_id": 0, 00:10:09.516 "enable_zerocopy_send_server": true, 00:10:09.516 "enable_zerocopy_send_client": false, 00:10:09.516 "zerocopy_threshold": 0, 00:10:09.516 "tls_version": 0, 00:10:09.516 "enable_ktls": false 00:10:09.516 } 00:10:09.516 }, 00:10:09.516 { 00:10:09.516 "method": "sock_impl_set_options", 00:10:09.516 "params": { 00:10:09.516 "impl_name": "posix", 00:10:09.516 "recv_buf_size": 2097152, 00:10:09.516 "send_buf_size": 2097152, 00:10:09.516 "enable_recv_pipe": true, 00:10:09.516 "enable_quickack": false, 00:10:09.516 "enable_placement_id": 0, 00:10:09.516 "enable_zerocopy_send_server": true, 00:10:09.516 "enable_zerocopy_send_client": false, 00:10:09.516 "zerocopy_threshold": 0, 00:10:09.516 "tls_version": 0, 00:10:09.516 "enable_ktls": false 00:10:09.516 } 00:10:09.516 } 00:10:09.516 ] 00:10:09.516 }, 00:10:09.516 { 00:10:09.516 "subsystem": "vmd", 00:10:09.516 "config": [] 00:10:09.516 }, 00:10:09.516 { 00:10:09.516 "subsystem": "accel", 00:10:09.516 "config": [ 00:10:09.516 { 00:10:09.516 "method": "accel_set_options", 00:10:09.516 "params": { 00:10:09.516 "small_cache_size": 128, 00:10:09.516 "large_cache_size": 16, 00:10:09.516 "task_count": 2048, 00:10:09.516 "sequence_count": 2048, 00:10:09.516 "buf_count": 2048 00:10:09.516 } 00:10:09.516 } 00:10:09.516 ] 00:10:09.516 }, 00:10:09.516 { 00:10:09.516 "subsystem": "bdev", 00:10:09.516 "config": [ 00:10:09.516 { 00:10:09.516 "method": "bdev_set_options", 00:10:09.516 "params": { 00:10:09.516 "bdev_io_pool_size": 65535, 00:10:09.516 "bdev_io_cache_size": 256, 00:10:09.516 "bdev_auto_examine": true, 00:10:09.516 "iobuf_small_cache_size": 128, 00:10:09.516 "iobuf_large_cache_size": 16 00:10:09.516 } 00:10:09.516 }, 00:10:09.516 { 00:10:09.516 "method": "bdev_raid_set_options", 00:10:09.516 "params": { 00:10:09.516 "process_window_size_kb": 1024, 00:10:09.516 "process_max_bandwidth_mb_sec": 0 00:10:09.516 } 00:10:09.516 }, 00:10:09.516 { 00:10:09.516 "method": "bdev_iscsi_set_options", 00:10:09.516 "params": { 00:10:09.516 "timeout_sec": 30 00:10:09.516 } 00:10:09.516 }, 00:10:09.516 { 00:10:09.516 "method": "bdev_nvme_set_options", 00:10:09.516 "params": { 00:10:09.516 "action_on_timeout": "none", 00:10:09.516 "timeout_us": 0, 00:10:09.516 "timeout_admin_us": 0, 00:10:09.516 "keep_alive_timeout_ms": 10000, 00:10:09.516 "arbitration_burst": 0, 00:10:09.516 "low_priority_weight": 0, 00:10:09.516 "medium_priority_weight": 0, 00:10:09.516 "high_priority_weight": 0, 00:10:09.516 "nvme_adminq_poll_period_us": 10000, 00:10:09.516 "nvme_ioq_poll_period_us": 0, 00:10:09.516 "io_queue_requests": 0, 00:10:09.516 "delay_cmd_submit": true, 00:10:09.516 "transport_retry_count": 4, 00:10:09.516 "bdev_retry_count": 3, 00:10:09.516 "transport_ack_timeout": 0, 00:10:09.516 "ctrlr_loss_timeout_sec": 0, 00:10:09.516 "reconnect_delay_sec": 0, 00:10:09.516 "fast_io_fail_timeout_sec": 0, 00:10:09.516 "disable_auto_failback": false, 00:10:09.516 "generate_uuids": false, 00:10:09.516 "transport_tos": 0, 00:10:09.516 "nvme_error_stat": false, 00:10:09.516 "rdma_srq_size": 0, 00:10:09.516 "io_path_stat": false, 
00:10:09.516 "allow_accel_sequence": false, 00:10:09.516 "rdma_max_cq_size": 0, 00:10:09.516 "rdma_cm_event_timeout_ms": 0, 00:10:09.516 "dhchap_digests": [ 00:10:09.516 "sha256", 00:10:09.516 "sha384", 00:10:09.516 "sha512" 00:10:09.516 ], 00:10:09.516 "dhchap_dhgroups": [ 00:10:09.516 "null", 00:10:09.516 "ffdhe2048", 00:10:09.516 "ffdhe3072", 00:10:09.516 "ffdhe4096", 00:10:09.516 "ffdhe6144", 00:10:09.516 "ffdhe8192" 00:10:09.516 ] 00:10:09.516 } 00:10:09.516 }, 00:10:09.516 { 00:10:09.516 "method": "bdev_nvme_set_hotplug", 00:10:09.516 "params": { 00:10:09.516 "period_us": 100000, 00:10:09.516 "enable": false 00:10:09.516 } 00:10:09.516 }, 00:10:09.516 { 00:10:09.516 "method": "bdev_wait_for_examine" 00:10:09.516 } 00:10:09.516 ] 00:10:09.516 }, 00:10:09.516 { 00:10:09.516 "subsystem": "scsi", 00:10:09.516 "config": null 00:10:09.516 }, 00:10:09.516 { 00:10:09.516 "subsystem": "scheduler", 00:10:09.516 "config": [ 00:10:09.516 { 00:10:09.516 "method": "framework_set_scheduler", 00:10:09.516 "params": { 00:10:09.516 "name": "static" 00:10:09.516 } 00:10:09.516 } 00:10:09.516 ] 00:10:09.516 }, 00:10:09.516 { 00:10:09.516 "subsystem": "vhost_scsi", 00:10:09.516 "config": [] 00:10:09.516 }, 00:10:09.516 { 00:10:09.516 "subsystem": "vhost_blk", 00:10:09.516 "config": [] 00:10:09.516 }, 00:10:09.516 { 00:10:09.516 "subsystem": "ublk", 00:10:09.516 "config": [] 00:10:09.516 }, 00:10:09.516 { 00:10:09.516 "subsystem": "nbd", 00:10:09.516 "config": [] 00:10:09.516 }, 00:10:09.516 { 00:10:09.516 "subsystem": "nvmf", 00:10:09.516 "config": [ 00:10:09.516 { 00:10:09.516 "method": "nvmf_set_config", 00:10:09.516 "params": { 00:10:09.516 "discovery_filter": "match_any", 00:10:09.516 "admin_cmd_passthru": { 00:10:09.516 "identify_ctrlr": false 00:10:09.516 }, 00:10:09.516 "dhchap_digests": [ 00:10:09.516 "sha256", 00:10:09.516 "sha384", 00:10:09.516 "sha512" 00:10:09.516 ], 00:10:09.516 "dhchap_dhgroups": [ 00:10:09.516 "null", 00:10:09.516 "ffdhe2048", 00:10:09.516 "ffdhe3072", 00:10:09.516 "ffdhe4096", 00:10:09.516 "ffdhe6144", 00:10:09.516 "ffdhe8192" 00:10:09.516 ] 00:10:09.516 } 00:10:09.516 }, 00:10:09.517 { 00:10:09.517 "method": "nvmf_set_max_subsystems", 00:10:09.517 "params": { 00:10:09.517 "max_subsystems": 1024 00:10:09.517 } 00:10:09.517 }, 00:10:09.517 { 00:10:09.517 "method": "nvmf_set_crdt", 00:10:09.517 "params": { 00:10:09.517 "crdt1": 0, 00:10:09.517 "crdt2": 0, 00:10:09.517 "crdt3": 0 00:10:09.517 } 00:10:09.517 }, 00:10:09.517 { 00:10:09.517 "method": "nvmf_create_transport", 00:10:09.517 "params": { 00:10:09.517 "trtype": "TCP", 00:10:09.517 "max_queue_depth": 128, 00:10:09.517 "max_io_qpairs_per_ctrlr": 127, 00:10:09.517 "in_capsule_data_size": 4096, 00:10:09.517 "max_io_size": 131072, 00:10:09.517 "io_unit_size": 131072, 00:10:09.517 "max_aq_depth": 128, 00:10:09.517 "num_shared_buffers": 511, 00:10:09.517 "buf_cache_size": 4294967295, 00:10:09.517 "dif_insert_or_strip": false, 00:10:09.517 "zcopy": false, 00:10:09.517 "c2h_success": true, 00:10:09.517 "sock_priority": 0, 00:10:09.517 "abort_timeout_sec": 1, 00:10:09.517 "ack_timeout": 0, 00:10:09.517 "data_wr_pool_size": 0 00:10:09.517 } 00:10:09.517 } 00:10:09.517 ] 00:10:09.517 }, 00:10:09.517 { 00:10:09.517 "subsystem": "iscsi", 00:10:09.517 "config": [ 00:10:09.517 { 00:10:09.517 "method": "iscsi_set_options", 00:10:09.517 "params": { 00:10:09.517 "node_base": "iqn.2016-06.io.spdk", 00:10:09.517 "max_sessions": 128, 00:10:09.517 "max_connections_per_session": 2, 00:10:09.517 "max_queue_depth": 64, 00:10:09.517 
"default_time2wait": 2, 00:10:09.517 "default_time2retain": 20, 00:10:09.517 "first_burst_length": 8192, 00:10:09.517 "immediate_data": true, 00:10:09.517 "allow_duplicated_isid": false, 00:10:09.517 "error_recovery_level": 0, 00:10:09.517 "nop_timeout": 60, 00:10:09.517 "nop_in_interval": 30, 00:10:09.517 "disable_chap": false, 00:10:09.517 "require_chap": false, 00:10:09.517 "mutual_chap": false, 00:10:09.517 "chap_group": 0, 00:10:09.517 "max_large_datain_per_connection": 64, 00:10:09.517 "max_r2t_per_connection": 4, 00:10:09.517 "pdu_pool_size": 36864, 00:10:09.517 "immediate_data_pool_size": 16384, 00:10:09.517 "data_out_pool_size": 2048 00:10:09.517 } 00:10:09.517 } 00:10:09.517 ] 00:10:09.517 } 00:10:09.517 ] 00:10:09.517 } 00:10:09.517 10:01:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:09.517 10:01:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58442 00:10:09.517 10:01:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58442 ']' 00:10:09.517 10:01:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58442 00:10:09.517 10:01:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:10:09.517 10:01:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:09.517 10:01:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58442 00:10:09.517 10:01:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:09.517 10:01:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:09.517 10:01:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58442' 00:10:09.517 killing process with pid 58442 00:10:09.517 10:01:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58442 00:10:09.517 10:01:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58442 00:10:12.050 10:01:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58498 00:10:12.050 10:01:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:12.050 10:01:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:10:17.321 10:01:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58498 00:10:17.321 10:01:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58498 ']' 00:10:17.321 10:01:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58498 00:10:17.321 10:01:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:10:17.321 10:01:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:17.321 10:01:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58498 00:10:17.321 10:01:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:17.321 killing process with pid 58498 00:10:17.321 10:01:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:17.321 10:01:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58498' 00:10:17.321 10:01:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 58498 00:10:17.321 10:01:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58498 00:10:19.853 10:01:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:10:19.853 10:01:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:10:19.853 00:10:19.853 real 0m12.024s 00:10:19.853 user 0m11.154s 00:10:19.853 sys 0m1.302s 00:10:19.853 10:01:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.853 10:01:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:19.853 ************************************ 00:10:19.853 END TEST skip_rpc_with_json 00:10:19.853 ************************************ 00:10:19.853 10:01:50 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:10:19.853 10:01:50 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:19.853 10:01:50 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:19.853 10:01:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:19.853 ************************************ 00:10:19.853 START TEST skip_rpc_with_delay 00:10:19.853 ************************************ 00:10:19.853 10:01:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:10:19.853 10:01:50 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:19.853 10:01:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:10:19.853 10:01:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:19.853 10:01:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:19.853 10:01:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:19.853 10:01:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:19.853 10:01:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:19.853 10:01:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:19.853 10:01:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:19.853 10:01:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:19.853 10:01:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:10:19.853 10:01:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:19.853 [2024-12-09 10:01:50.581675] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
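The ERROR just above is the whole point of skip_rpc_with_delay: --wait-for-rpc is meaningless when --no-rpc-server suppresses the RPC server, so startup must abort with a nonzero exit. A minimal reproduction, assuming an SPDK checkout:

  # contradictory flags: the target must refuse to start
  if build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "unexpected: target started despite contradictory flags" >&2
      exit 1
  fi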
00:10:20.111 10:01:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:10:20.111 10:01:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:20.111 10:01:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:20.111 10:01:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:20.111 00:10:20.111 real 0m0.209s 00:10:20.111 user 0m0.111s 00:10:20.111 sys 0m0.095s 00:10:20.111 10:01:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:20.111 10:01:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:10:20.111 ************************************ 00:10:20.111 END TEST skip_rpc_with_delay 00:10:20.111 ************************************ 00:10:20.111 10:01:50 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:10:20.111 10:01:50 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:10:20.111 10:01:50 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:10:20.111 10:01:50 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:20.111 10:01:50 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:20.111 10:01:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.111 ************************************ 00:10:20.111 START TEST exit_on_failed_rpc_init 00:10:20.111 ************************************ 00:10:20.111 10:01:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:10:20.111 10:01:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58637 00:10:20.111 10:01:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58637 00:10:20.111 10:01:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:20.111 10:01:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58637 ']' 00:10:20.111 10:01:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.111 10:01:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:20.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.111 10:01:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.111 10:01:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:20.111 10:01:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:10:20.111 [2024-12-09 10:01:50.892745] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
00:10:20.111 [2024-12-09 10:01:50.893546] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58637 ] 00:10:20.370 [2024-12-09 10:01:51.073059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.628 [2024-12-09 10:01:51.227502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.563 10:01:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:21.563 10:01:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:10:21.563 10:01:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:21.563 10:01:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:10:21.563 10:01:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:10:21.563 10:01:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:10:21.563 10:01:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:21.563 10:01:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:21.563 10:01:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:21.563 10:01:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:21.563 10:01:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:21.563 10:01:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:21.563 10:01:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:21.563 10:01:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:10:21.563 10:01:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:10:21.822 [2024-12-09 10:01:52.362345] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:10:21.822 [2024-12-09 10:01:52.362514] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58660 ] 00:10:21.822 [2024-12-09 10:01:52.550191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.081 [2024-12-09 10:01:52.730052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:22.081 [2024-12-09 10:01:52.730224] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
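exit_on_failed_rpc_init provokes exactly the failure being logged here: a first spdk_tgt already owns /var/tmp/spdk.sock, so a second instance cannot bind its RPC listener and has to stop itself with an error. A sketch under the same assumptions as above (the test's trap/cleanup plumbing omitted):

  build/bin/spdk_tgt -m 0x1 &    # first target claims /var/tmp/spdk.sock
  first_pid=$!
  sleep 5
  # second target on the same socket path: RPC init must fail and the app must stop
  if build/bin/spdk_tgt -m 0x2; then
      echo "unexpected: second target started on a busy RPC socket" >&2
      exit 1
  fi
  kill "$first_pid"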
00:10:22.081 [2024-12-09 10:01:52.730254] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:10:22.081 [2024-12-09 10:01:52.730281] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:22.648 10:01:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:10:22.648 10:01:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:22.648 10:01:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:10:22.648 10:01:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:10:22.648 10:01:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:10:22.648 10:01:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:22.648 10:01:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:22.648 10:01:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58637 00:10:22.648 10:01:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58637 ']' 00:10:22.648 10:01:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58637 00:10:22.648 10:01:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:10:22.648 10:01:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:22.648 10:01:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58637 00:10:22.648 10:01:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:22.648 killing process with pid 58637 00:10:22.648 10:01:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:22.648 10:01:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58637' 00:10:22.648 10:01:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58637 00:10:22.648 10:01:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58637 00:10:25.178 00:10:25.178 real 0m4.972s 00:10:25.178 user 0m5.436s 00:10:25.178 sys 0m0.781s 00:10:25.178 10:01:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:25.178 10:01:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:10:25.178 ************************************ 00:10:25.178 END TEST exit_on_failed_rpc_init 00:10:25.178 ************************************ 00:10:25.178 10:01:55 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:25.178 00:10:25.178 real 0m25.238s 00:10:25.178 user 0m23.808s 00:10:25.178 sys 0m2.983s 00:10:25.178 10:01:55 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:25.178 ************************************ 00:10:25.178 END TEST skip_rpc 00:10:25.178 10:01:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.178 ************************************ 00:10:25.178 10:01:55 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:10:25.178 10:01:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:25.178 10:01:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:25.178 10:01:55 -- common/autotest_common.sh@10 -- # set +x 00:10:25.178 
************************************ 00:10:25.178 START TEST rpc_client 00:10:25.178 ************************************ 00:10:25.178 10:01:55 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:10:25.178 * Looking for test storage... 00:10:25.178 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:10:25.178 10:01:55 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:25.178 10:01:55 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:10:25.178 10:01:55 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:25.178 10:01:55 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:25.178 10:01:55 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:25.178 10:01:55 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:25.178 10:01:55 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:25.178 10:01:55 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:10:25.178 10:01:55 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:10:25.178 10:01:55 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:10:25.178 10:01:55 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:10:25.178 10:01:55 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:10:25.178 10:01:55 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:10:25.178 10:01:55 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:10:25.178 10:01:55 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:25.178 10:01:55 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:10:25.178 10:01:55 rpc_client -- scripts/common.sh@345 -- # : 1 00:10:25.178 10:01:55 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:25.178 10:01:55 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:25.178 10:01:55 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:10:25.178 10:01:55 rpc_client -- scripts/common.sh@353 -- # local d=1 00:10:25.178 10:01:55 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:25.178 10:01:55 rpc_client -- scripts/common.sh@355 -- # echo 1 00:10:25.178 10:01:55 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:10:25.178 10:01:55 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:10:25.178 10:01:55 rpc_client -- scripts/common.sh@353 -- # local d=2 00:10:25.178 10:01:55 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:25.178 10:01:55 rpc_client -- scripts/common.sh@355 -- # echo 2 00:10:25.178 10:01:55 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:10:25.178 10:01:55 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:25.178 10:01:55 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:25.178 10:01:55 rpc_client -- scripts/common.sh@368 -- # return 0 00:10:25.178 10:01:55 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:25.178 10:01:55 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:25.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.178 --rc genhtml_branch_coverage=1 00:10:25.178 --rc genhtml_function_coverage=1 00:10:25.178 --rc genhtml_legend=1 00:10:25.178 --rc geninfo_all_blocks=1 00:10:25.178 --rc geninfo_unexecuted_blocks=1 00:10:25.178 00:10:25.178 ' 00:10:25.178 10:01:55 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:25.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.178 --rc genhtml_branch_coverage=1 00:10:25.178 --rc genhtml_function_coverage=1 00:10:25.178 --rc genhtml_legend=1 00:10:25.178 --rc geninfo_all_blocks=1 00:10:25.178 --rc geninfo_unexecuted_blocks=1 00:10:25.178 00:10:25.178 ' 00:10:25.178 10:01:55 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:25.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.178 --rc genhtml_branch_coverage=1 00:10:25.178 --rc genhtml_function_coverage=1 00:10:25.178 --rc genhtml_legend=1 00:10:25.178 --rc geninfo_all_blocks=1 00:10:25.178 --rc geninfo_unexecuted_blocks=1 00:10:25.178 00:10:25.178 ' 00:10:25.178 10:01:55 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:25.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.178 --rc genhtml_branch_coverage=1 00:10:25.178 --rc genhtml_function_coverage=1 00:10:25.178 --rc genhtml_legend=1 00:10:25.178 --rc geninfo_all_blocks=1 00:10:25.178 --rc geninfo_unexecuted_blocks=1 00:10:25.178 00:10:25.178 ' 00:10:25.178 10:01:55 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:10:25.437 OK 00:10:25.437 10:01:56 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:10:25.437 00:10:25.437 real 0m0.247s 00:10:25.437 user 0m0.148s 00:10:25.437 sys 0m0.110s 00:10:25.437 10:01:56 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:25.437 10:01:56 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:10:25.437 ************************************ 00:10:25.437 END TEST rpc_client 00:10:25.437 ************************************ 00:10:25.437 10:01:56 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:10:25.437 10:01:56 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:25.437 10:01:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:25.437 10:01:56 -- common/autotest_common.sh@10 -- # set +x 00:10:25.437 ************************************ 00:10:25.437 START TEST json_config 00:10:25.437 ************************************ 00:10:25.437 10:01:56 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:10:25.437 10:01:56 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:25.437 10:01:56 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:10:25.437 10:01:56 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:25.437 10:01:56 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:25.437 10:01:56 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:25.437 10:01:56 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:25.437 10:01:56 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:25.437 10:01:56 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:10:25.437 10:01:56 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:10:25.437 10:01:56 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:10:25.437 10:01:56 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:10:25.437 10:01:56 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:10:25.437 10:01:56 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:10:25.437 10:01:56 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:10:25.437 10:01:56 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:25.437 10:01:56 json_config -- scripts/common.sh@344 -- # case "$op" in 00:10:25.437 10:01:56 json_config -- scripts/common.sh@345 -- # : 1 00:10:25.437 10:01:56 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:25.437 10:01:56 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:25.437 10:01:56 json_config -- scripts/common.sh@365 -- # decimal 1 00:10:25.437 10:01:56 json_config -- scripts/common.sh@353 -- # local d=1 00:10:25.437 10:01:56 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:25.437 10:01:56 json_config -- scripts/common.sh@355 -- # echo 1 00:10:25.437 10:01:56 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:10:25.437 10:01:56 json_config -- scripts/common.sh@366 -- # decimal 2 00:10:25.437 10:01:56 json_config -- scripts/common.sh@353 -- # local d=2 00:10:25.437 10:01:56 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:25.437 10:01:56 json_config -- scripts/common.sh@355 -- # echo 2 00:10:25.437 10:01:56 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:10:25.437 10:01:56 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:25.437 10:01:56 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:25.437 10:01:56 json_config -- scripts/common.sh@368 -- # return 0 00:10:25.437 10:01:56 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:25.437 10:01:56 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:25.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.437 --rc genhtml_branch_coverage=1 00:10:25.437 --rc genhtml_function_coverage=1 00:10:25.437 --rc genhtml_legend=1 00:10:25.437 --rc geninfo_all_blocks=1 00:10:25.437 --rc geninfo_unexecuted_blocks=1 00:10:25.437 00:10:25.437 ' 00:10:25.437 10:01:56 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:25.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.437 --rc genhtml_branch_coverage=1 00:10:25.437 --rc genhtml_function_coverage=1 00:10:25.437 --rc genhtml_legend=1 00:10:25.437 --rc geninfo_all_blocks=1 00:10:25.437 --rc geninfo_unexecuted_blocks=1 00:10:25.437 00:10:25.437 ' 00:10:25.696 10:01:56 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:25.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.696 --rc genhtml_branch_coverage=1 00:10:25.696 --rc genhtml_function_coverage=1 00:10:25.696 --rc genhtml_legend=1 00:10:25.696 --rc geninfo_all_blocks=1 00:10:25.696 --rc geninfo_unexecuted_blocks=1 00:10:25.696 00:10:25.696 ' 00:10:25.696 10:01:56 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:25.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.696 --rc genhtml_branch_coverage=1 00:10:25.696 --rc genhtml_function_coverage=1 00:10:25.696 --rc genhtml_legend=1 00:10:25.696 --rc geninfo_all_blocks=1 00:10:25.696 --rc geninfo_unexecuted_blocks=1 00:10:25.696 00:10:25.696 ' 00:10:25.696 10:01:56 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:25.696 10:01:56 json_config -- nvmf/common.sh@7 -- # uname -s 00:10:25.696 10:01:56 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:25.696 10:01:56 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:25.696 10:01:56 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:25.696 10:01:56 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:25.696 10:01:56 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:25.696 10:01:56 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:25.696 10:01:56 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:25.696 10:01:56 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:25.696 10:01:56 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:25.696 10:01:56 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:25.696 10:01:56 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9480b59e-3d5c-4268-b741-40b3738e039b 00:10:25.696 10:01:56 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=9480b59e-3d5c-4268-b741-40b3738e039b 00:10:25.696 10:01:56 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:25.696 10:01:56 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:25.696 10:01:56 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:25.696 10:01:56 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:25.696 10:01:56 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:25.696 10:01:56 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:10:25.697 10:01:56 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:25.697 10:01:56 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:25.697 10:01:56 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:25.697 10:01:56 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.697 10:01:56 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.697 10:01:56 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.697 10:01:56 json_config -- paths/export.sh@5 -- # export PATH 00:10:25.697 10:01:56 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.697 10:01:56 json_config -- nvmf/common.sh@51 -- # : 0 00:10:25.697 10:01:56 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:25.697 10:01:56 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:25.697 10:01:56 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:25.697 10:01:56 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:25.697 10:01:56 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:25.697 10:01:56 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:25.697 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:25.697 10:01:56 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:25.697 10:01:56 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:25.697 10:01:56 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:25.697 10:01:56 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:10:25.697 10:01:56 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:10:25.697 10:01:56 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:10:25.697 10:01:56 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:10:25.697 10:01:56 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:10:25.697 10:01:56 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:10:25.697 WARNING: No tests are enabled so not running JSON configuration tests 00:10:25.697 10:01:56 json_config -- json_config/json_config.sh@28 -- # exit 0 00:10:25.697 00:10:25.697 real 0m0.191s 00:10:25.697 user 0m0.119s 00:10:25.697 sys 0m0.071s 00:10:25.697 10:01:56 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:25.697 ************************************ 00:10:25.697 END TEST json_config 00:10:25.697 ************************************ 00:10:25.697 10:01:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:25.697 10:01:56 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:10:25.697 10:01:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:25.697 10:01:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:25.697 10:01:56 -- common/autotest_common.sh@10 -- # set +x 00:10:25.697 ************************************ 00:10:25.697 START TEST json_config_extra_key 00:10:25.697 ************************************ 00:10:25.697 10:01:56 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:10:25.697 10:01:56 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:25.697 10:01:56 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:10:25.697 10:01:56 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:25.697 10:01:56 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:25.697 10:01:56 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:25.697 10:01:56 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:25.697 10:01:56 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:25.697 10:01:56 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:10:25.697 10:01:56 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:10:25.697 10:01:56 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:10:25.697 10:01:56 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:10:25.697 10:01:56 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:10:25.697 10:01:56 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:10:25.697 10:01:56 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:10:25.697 10:01:56 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:25.697 10:01:56 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:10:25.697 10:01:56 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:10:25.697 10:01:56 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:25.697 10:01:56 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:25.697 10:01:56 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:10:25.697 10:01:56 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:10:25.697 10:01:56 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:25.697 10:01:56 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:10:25.697 10:01:56 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:10:25.697 10:01:56 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:10:25.697 10:01:56 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:10:25.697 10:01:56 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:25.697 10:01:56 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:10:25.697 10:01:56 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:10:25.697 10:01:56 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:25.697 10:01:56 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:25.697 10:01:56 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:10:25.697 10:01:56 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:25.697 10:01:56 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:25.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.697 --rc genhtml_branch_coverage=1 00:10:25.697 --rc genhtml_function_coverage=1 00:10:25.697 --rc genhtml_legend=1 00:10:25.697 --rc geninfo_all_blocks=1 00:10:25.697 --rc geninfo_unexecuted_blocks=1 00:10:25.697 00:10:25.697 ' 00:10:25.697 10:01:56 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:25.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.697 --rc genhtml_branch_coverage=1 00:10:25.697 --rc genhtml_function_coverage=1 00:10:25.697 --rc genhtml_legend=1 00:10:25.697 --rc geninfo_all_blocks=1 00:10:25.697 --rc geninfo_unexecuted_blocks=1 00:10:25.697 00:10:25.697 ' 00:10:25.697 10:01:56 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:25.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.697 --rc genhtml_branch_coverage=1 00:10:25.697 --rc genhtml_function_coverage=1 00:10:25.697 --rc genhtml_legend=1 00:10:25.697 --rc geninfo_all_blocks=1 00:10:25.697 --rc geninfo_unexecuted_blocks=1 00:10:25.697 00:10:25.697 ' 00:10:25.697 10:01:56 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:25.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.697 --rc genhtml_branch_coverage=1 00:10:25.697 --rc 
genhtml_function_coverage=1 00:10:25.697 --rc genhtml_legend=1 00:10:25.697 --rc geninfo_all_blocks=1 00:10:25.697 --rc geninfo_unexecuted_blocks=1 00:10:25.697 00:10:25.697 ' 00:10:25.697 10:01:56 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:25.697 10:01:56 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:10:25.956 10:01:56 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:25.956 10:01:56 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:25.956 10:01:56 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:25.956 10:01:56 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:25.956 10:01:56 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:25.956 10:01:56 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:25.956 10:01:56 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:25.956 10:01:56 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:25.956 10:01:56 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:25.956 10:01:56 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:25.956 10:01:56 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9480b59e-3d5c-4268-b741-40b3738e039b 00:10:25.956 10:01:56 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=9480b59e-3d5c-4268-b741-40b3738e039b 00:10:25.956 10:01:56 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:25.956 10:01:56 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:25.956 10:01:56 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:25.956 10:01:56 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:25.956 10:01:56 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:25.956 10:01:56 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:10:25.956 10:01:56 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:25.956 10:01:56 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:25.956 10:01:56 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:25.956 10:01:56 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.956 10:01:56 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.956 10:01:56 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.956 10:01:56 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:10:25.956 10:01:56 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.956 10:01:56 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:10:25.956 10:01:56 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:25.956 10:01:56 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:25.956 10:01:56 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:25.956 10:01:56 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:25.956 10:01:56 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:25.956 10:01:56 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:25.956 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:25.956 10:01:56 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:25.956 10:01:56 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:25.956 10:01:56 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:25.956 10:01:56 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:10:25.956 10:01:56 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:10:25.957 10:01:56 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:10:25.957 10:01:56 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:10:25.957 10:01:56 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:10:25.957 10:01:56 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:10:25.957 10:01:56 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:10:25.957 10:01:56 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:10:25.957 10:01:56 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:10:25.957 10:01:56 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:10:25.957 10:01:56 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:10:25.957 INFO: launching applications... 
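The "[: : integer expression expected" complaint from nvmf/common.sh line 33, seen here and in the earlier json_config pass, is benign: a guard of the form '[' "$VAR" -eq 1 ']' runs while the variable is empty, so test receives a blank string where it expects a number and the condition simply evaluates false. Writing it as '[' "${VAR:-0}" -eq 1 ']' would silence the message.

The json_config common.sh setup traced just above keeps one bash associative array per attribute of each managed app, all keyed by app name, plus an ERR trap for cleanup. A minimal sketch of that bookkeeping pattern, with a simplified stand-in for on_error_exit (the real handler lives in test/json_config/common.sh):

    # One associative array per attribute, keyed by app name ('target').
    declare -A app_pid=(['target']='')
    declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
    declare -A app_params=(['target']='-m 0x1 -s 1024')
    declare -A configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')

    # Simplified error handler: report where the failure happened, then clean up.
    on_error_exit() {
        echo "ERROR: ${1}() failed at line ${2}" >&2
        [[ -n ${app_pid['target']} ]] && kill -SIGINT "${app_pid['target']}" 2> /dev/null
        exit 1
    }
    trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR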
00:10:25.957 10:01:56 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:10:25.957 10:01:56 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:10:25.957 10:01:56 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:10:25.957 10:01:56 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:25.957 10:01:56 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:25.957 10:01:56 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:10:25.957 10:01:56 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:25.957 10:01:56 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:25.957 10:01:56 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58876 00:10:25.957 10:01:56 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:25.957 Waiting for target to run... 00:10:25.957 10:01:56 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:10:25.957 10:01:56 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58876 /var/tmp/spdk_tgt.sock 00:10:25.957 10:01:56 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 58876 ']' 00:10:25.957 10:01:56 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:25.957 10:01:56 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:25.957 10:01:56 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:25.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:25.957 10:01:56 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:25.957 10:01:56 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:25.957 [2024-12-09 10:01:56.643188] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:10:25.957 [2024-12-09 10:01:56.643609] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58876 ] 00:10:26.522 [2024-12-09 10:01:57.229373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.780 [2024-12-09 10:01:57.395071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.346 10:01:58 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:27.346 00:10:27.346 INFO: shutting down applications... 00:10:27.346 10:01:58 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:10:27.346 10:01:58 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:10:27.346 10:01:58 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
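The launch above starts spdk_tgt against /var/tmp/spdk_tgt.sock and then blocks in waitforlisten (pid 58876, max_retries=100) until the target answers RPC. A minimal stand-in with the same shape; the poll interval and the rpc.py probe are choices made for this sketch, and the real helper in test/common/autotest_common.sh is more thorough:

    waitforlisten() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
        for ((i = 0; i < max_retries; i++)); do
            # kill -0 checks the PID is alive; the RPC probe proves the socket answers.
            if kill -0 "$pid" 2> /dev/null &&
                scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.1
        done
        echo "Timed out waiting for pid $pid on $sock" >&2
        return 1
    }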
00:10:27.346 10:01:58 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:10:27.346 10:01:58 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:10:27.346 10:01:58 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:10:27.346 10:01:58 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58876 ]] 00:10:27.346 10:01:58 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58876 00:10:27.346 10:01:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:10:27.346 10:01:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:27.346 10:01:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58876 00:10:27.346 10:01:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:27.979 10:01:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:27.979 10:01:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:27.979 10:01:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58876 00:10:27.979 10:01:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:28.546 10:01:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:28.546 10:01:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:28.546 10:01:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58876 00:10:28.546 10:01:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:29.114 10:01:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:29.114 10:01:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:29.114 10:01:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58876 00:10:29.114 10:01:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:29.372 10:02:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:29.372 10:02:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:29.372 10:02:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58876 00:10:29.372 10:02:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:29.939 10:02:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:29.939 10:02:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:29.939 10:02:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58876 00:10:29.939 10:02:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:30.506 10:02:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:30.506 10:02:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:30.506 10:02:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58876 00:10:30.506 10:02:01 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:10:30.506 10:02:01 json_config_extra_key -- json_config/common.sh@43 -- # break 00:10:30.506 SPDK target shutdown done 00:10:30.506 Success 00:10:30.506 10:02:01 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:10:30.506 10:02:01 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:10:30.506 10:02:01 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:10:30.506 00:10:30.506 real 0m4.845s 00:10:30.506 user 0m4.531s 00:10:30.506 sys 0m0.775s 00:10:30.506 
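The teardown traced above is a plain retry loop: one SIGINT, then up to 30 polls with kill -0 at 0.5 s intervals (a 15 s budget) before declaring success. Reconstructed as a standalone helper; the SIGKILL escalation at the end is an addition for the sketch, whereas the test itself treats an exhausted loop as a failure:

    shutdown_app() {
        local pid=$1
        kill -SIGINT "$pid"
        for ((i = 0; i < 30; i++)); do
            # kill -0 starts failing once the process has exited.
            kill -0 "$pid" 2> /dev/null || break
            sleep 0.5
        done
        if kill -0 "$pid" 2> /dev/null; then
            echo "pid $pid still alive after SIGINT, escalating" >&2
            kill -SIGKILL "$pid"
            return 1
        fi
        echo 'SPDK target shutdown done'
    }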
10:02:01 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:30.506 10:02:01 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:30.506 ************************************ 00:10:30.506 END TEST json_config_extra_key 00:10:30.506 ************************************ 00:10:30.506 10:02:01 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:30.506 10:02:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:30.506 10:02:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:30.506 10:02:01 -- common/autotest_common.sh@10 -- # set +x 00:10:30.506 ************************************ 00:10:30.506 START TEST alias_rpc 00:10:30.506 ************************************ 00:10:30.506 10:02:01 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:30.506 * Looking for test storage... 00:10:30.765 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:10:30.765 10:02:01 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:30.765 10:02:01 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:10:30.765 10:02:01 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:30.765 10:02:01 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:30.765 10:02:01 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:30.765 10:02:01 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:30.765 10:02:01 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:30.765 10:02:01 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:30.765 10:02:01 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:30.765 10:02:01 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:30.765 10:02:01 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:30.765 10:02:01 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:30.765 10:02:01 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:30.765 10:02:01 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:30.765 10:02:01 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:30.765 10:02:01 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:30.765 10:02:01 alias_rpc -- scripts/common.sh@345 -- # : 1 00:10:30.765 10:02:01 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:30.765 10:02:01 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:30.765 10:02:01 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:30.765 10:02:01 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:10:30.765 10:02:01 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:30.765 10:02:01 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:10:30.765 10:02:01 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:30.765 10:02:01 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:30.765 10:02:01 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:10:30.765 10:02:01 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:30.765 10:02:01 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:10:30.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:30.765 10:02:01 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:30.765 10:02:01 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:30.765 10:02:01 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:30.765 10:02:01 alias_rpc -- scripts/common.sh@368 -- # return 0 00:10:30.765 10:02:01 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:30.765 10:02:01 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:30.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.765 --rc genhtml_branch_coverage=1 00:10:30.765 --rc genhtml_function_coverage=1 00:10:30.765 --rc genhtml_legend=1 00:10:30.765 --rc geninfo_all_blocks=1 00:10:30.765 --rc geninfo_unexecuted_blocks=1 00:10:30.765 00:10:30.765 ' 00:10:30.765 10:02:01 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:30.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.765 --rc genhtml_branch_coverage=1 00:10:30.765 --rc genhtml_function_coverage=1 00:10:30.765 --rc genhtml_legend=1 00:10:30.765 --rc geninfo_all_blocks=1 00:10:30.765 --rc geninfo_unexecuted_blocks=1 00:10:30.765 00:10:30.765 ' 00:10:30.765 10:02:01 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:30.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.765 --rc genhtml_branch_coverage=1 00:10:30.765 --rc genhtml_function_coverage=1 00:10:30.765 --rc genhtml_legend=1 00:10:30.765 --rc geninfo_all_blocks=1 00:10:30.765 --rc geninfo_unexecuted_blocks=1 00:10:30.765 00:10:30.765 ' 00:10:30.765 10:02:01 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:30.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.765 --rc genhtml_branch_coverage=1 00:10:30.765 --rc genhtml_function_coverage=1 00:10:30.765 --rc genhtml_legend=1 00:10:30.765 --rc geninfo_all_blocks=1 00:10:30.765 --rc geninfo_unexecuted_blocks=1 00:10:30.765 00:10:30.765 ' 00:10:30.765 10:02:01 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:30.765 10:02:01 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58982 00:10:30.765 10:02:01 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58982 00:10:30.765 10:02:01 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 58982 ']' 00:10:30.765 10:02:01 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:30.765 10:02:01 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.765 10:02:01 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:30.765 10:02:01 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.765 10:02:01 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:30.765 10:02:01 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:30.765 [2024-12-09 10:02:01.532053] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
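The lcov version gate that reappears at the top of every test is scripts/common.sh's cmp_versions: split both versions on '.', '-' or ':', pad the shorter list with zeros, and compare component-wise. A condensed sketch of that logic (the real function also normalizes non-numeric fields through its decimal helper):

    cmp_versions() {
        local -a ver1 ver2
        local IFS=.-:               # split on '.', '-' and ':'
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < len; v++)); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}   # pad missing components with 0
            ((a > b)) && { [[ $2 == '>' ]]; return; }
            ((a < b)) && { [[ $2 == '<' ]]; return; }
        done
        [[ $2 == '==' || $2 == '<=' || $2 == '>=' ]]
    }
    # Mirrors the trace: "lt 1.15 2" succeeds because 1 < 2 in the first component.
    cmp_versions 1.15 '<' 2 && echo 'lcov is older than 2'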
00:10:30.765 [2024-12-09 10:02:01.532386] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58982 ] 00:10:31.024 [2024-12-09 10:02:01.714694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.283 [2024-12-09 10:02:01.889944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.219 10:02:02 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:32.219 10:02:02 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:32.219 10:02:02 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:10:32.495 10:02:03 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58982 00:10:32.495 10:02:03 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 58982 ']' 00:10:32.495 10:02:03 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 58982 00:10:32.495 10:02:03 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:10:32.495 10:02:03 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:32.495 10:02:03 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58982 00:10:32.495 10:02:03 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:32.495 10:02:03 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:32.495 killing process with pid 58982 00:10:32.495 10:02:03 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58982' 00:10:32.495 10:02:03 alias_rpc -- common/autotest_common.sh@973 -- # kill 58982 00:10:32.495 10:02:03 alias_rpc -- common/autotest_common.sh@978 -- # wait 58982 00:10:35.805 ************************************ 00:10:35.805 END TEST alias_rpc 00:10:35.805 ************************************ 00:10:35.805 00:10:35.805 real 0m4.629s 00:10:35.805 user 0m4.713s 00:10:35.805 sys 0m0.777s 00:10:35.805 10:02:05 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:35.805 10:02:05 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.805 10:02:05 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:10:35.805 10:02:05 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:10:35.806 10:02:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:35.806 10:02:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:35.806 10:02:05 -- common/autotest_common.sh@10 -- # set +x 00:10:35.806 ************************************ 00:10:35.806 START TEST spdkcli_tcp 00:10:35.806 ************************************ 00:10:35.806 10:02:05 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:10:35.806 * Looking for test storage... 
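killprocess, traced at the end of alias_rpc above, is deliberately careful: it verifies the PID is set and alive, on Linux reads the process name with ps so it never signals a sudo wrapper by mistake, and reaps the target with wait. A reduced version of those checks:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1                  # must still be running
        if [[ $(uname) == Linux ]]; then
            # Only signal the real reactor process, never a sudo wrapper.
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [[ $process_name == sudo ]] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"    # reaping works because the target is a child of this shell
    }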
00:10:35.806 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:10:35.806 10:02:05 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:35.806 10:02:05 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:10:35.806 10:02:05 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:35.806 10:02:06 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:35.806 10:02:06 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:35.806 10:02:06 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:35.806 10:02:06 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:35.806 10:02:06 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:10:35.806 10:02:06 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:10:35.806 10:02:06 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:10:35.806 10:02:06 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:10:35.806 10:02:06 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:10:35.806 10:02:06 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:10:35.806 10:02:06 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:10:35.806 10:02:06 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:35.806 10:02:06 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:10:35.806 10:02:06 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:10:35.806 10:02:06 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:35.806 10:02:06 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:35.806 10:02:06 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:10:35.806 10:02:06 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:10:35.806 10:02:06 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:35.806 10:02:06 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:10:35.806 10:02:06 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:35.806 10:02:06 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:10:35.806 10:02:06 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:10:35.806 10:02:06 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:35.806 10:02:06 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:10:35.806 10:02:06 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:35.806 10:02:06 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:35.806 10:02:06 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:35.806 10:02:06 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:10:35.806 10:02:06 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:35.806 10:02:06 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:35.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.806 --rc genhtml_branch_coverage=1 00:10:35.806 --rc genhtml_function_coverage=1 00:10:35.806 --rc genhtml_legend=1 00:10:35.806 --rc geninfo_all_blocks=1 00:10:35.806 --rc geninfo_unexecuted_blocks=1 00:10:35.806 00:10:35.806 ' 00:10:35.806 10:02:06 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:35.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.806 --rc genhtml_branch_coverage=1 00:10:35.806 --rc genhtml_function_coverage=1 00:10:35.806 --rc genhtml_legend=1 00:10:35.806 --rc geninfo_all_blocks=1 00:10:35.806 --rc geninfo_unexecuted_blocks=1 00:10:35.806 
00:10:35.806 ' 00:10:35.806 10:02:06 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:35.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.806 --rc genhtml_branch_coverage=1 00:10:35.806 --rc genhtml_function_coverage=1 00:10:35.806 --rc genhtml_legend=1 00:10:35.806 --rc geninfo_all_blocks=1 00:10:35.806 --rc geninfo_unexecuted_blocks=1 00:10:35.806 00:10:35.806 ' 00:10:35.806 10:02:06 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:35.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.806 --rc genhtml_branch_coverage=1 00:10:35.806 --rc genhtml_function_coverage=1 00:10:35.806 --rc genhtml_legend=1 00:10:35.806 --rc geninfo_all_blocks=1 00:10:35.806 --rc geninfo_unexecuted_blocks=1 00:10:35.806 00:10:35.806 ' 00:10:35.806 10:02:06 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:10:35.806 10:02:06 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:10:35.806 10:02:06 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:10:35.806 10:02:06 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:10:35.806 10:02:06 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:10:35.806 10:02:06 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:35.806 10:02:06 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:10:35.806 10:02:06 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:35.806 10:02:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:35.806 10:02:06 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59100 00:10:35.806 10:02:06 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:10:35.806 10:02:06 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59100 00:10:35.806 10:02:06 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 59100 ']' 00:10:35.806 10:02:06 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.806 10:02:06 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:35.806 10:02:06 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:35.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:35.806 10:02:06 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:35.806 10:02:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:35.806 [2024-12-09 10:02:06.197766] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
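Unlike the single-core targets earlier in the run, spdkcli_tcp starts spdk_tgt with -m 0x3 -p 0: the hex core mask 0x3 (binary 11) selects cores 0 and 1, and -p pins the main reactor to core 0, which is why two "Reactor started" notices follow below. The launch line in isolation:

    # One reactor per set bit in the mask: cores 0 and 1 here; -p 0 makes core 0
    # the main core. Expect one "Reactor started on core N" notice per reactor.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 &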
00:10:35.806 [2024-12-09 10:02:06.198181] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59100 ] 00:10:35.806 [2024-12-09 10:02:06.379191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:35.806 [2024-12-09 10:02:06.575233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.806 [2024-12-09 10:02:06.575240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:37.182 10:02:07 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:37.182 10:02:07 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:10:37.182 10:02:07 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59117 00:10:37.182 10:02:07 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:10:37.182 10:02:07 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:10:37.182 [ 00:10:37.182 "bdev_malloc_delete", 00:10:37.182 "bdev_malloc_create", 00:10:37.182 "bdev_null_resize", 00:10:37.182 "bdev_null_delete", 00:10:37.182 "bdev_null_create", 00:10:37.182 "bdev_nvme_cuse_unregister", 00:10:37.182 "bdev_nvme_cuse_register", 00:10:37.182 "bdev_opal_new_user", 00:10:37.182 "bdev_opal_set_lock_state", 00:10:37.182 "bdev_opal_delete", 00:10:37.182 "bdev_opal_get_info", 00:10:37.182 "bdev_opal_create", 00:10:37.182 "bdev_nvme_opal_revert", 00:10:37.182 "bdev_nvme_opal_init", 00:10:37.182 "bdev_nvme_send_cmd", 00:10:37.182 "bdev_nvme_set_keys", 00:10:37.182 "bdev_nvme_get_path_iostat", 00:10:37.182 "bdev_nvme_get_mdns_discovery_info", 00:10:37.182 "bdev_nvme_stop_mdns_discovery", 00:10:37.182 "bdev_nvme_start_mdns_discovery", 00:10:37.182 "bdev_nvme_set_multipath_policy", 00:10:37.182 "bdev_nvme_set_preferred_path", 00:10:37.182 "bdev_nvme_get_io_paths", 00:10:37.182 "bdev_nvme_remove_error_injection", 00:10:37.182 "bdev_nvme_add_error_injection", 00:10:37.182 "bdev_nvme_get_discovery_info", 00:10:37.182 "bdev_nvme_stop_discovery", 00:10:37.182 "bdev_nvme_start_discovery", 00:10:37.182 "bdev_nvme_get_controller_health_info", 00:10:37.182 "bdev_nvme_disable_controller", 00:10:37.182 "bdev_nvme_enable_controller", 00:10:37.182 "bdev_nvme_reset_controller", 00:10:37.182 "bdev_nvme_get_transport_statistics", 00:10:37.182 "bdev_nvme_apply_firmware", 00:10:37.182 "bdev_nvme_detach_controller", 00:10:37.182 "bdev_nvme_get_controllers", 00:10:37.182 "bdev_nvme_attach_controller", 00:10:37.182 "bdev_nvme_set_hotplug", 00:10:37.182 "bdev_nvme_set_options", 00:10:37.182 "bdev_passthru_delete", 00:10:37.182 "bdev_passthru_create", 00:10:37.182 "bdev_lvol_set_parent_bdev", 00:10:37.182 "bdev_lvol_set_parent", 00:10:37.182 "bdev_lvol_check_shallow_copy", 00:10:37.182 "bdev_lvol_start_shallow_copy", 00:10:37.182 "bdev_lvol_grow_lvstore", 00:10:37.182 "bdev_lvol_get_lvols", 00:10:37.182 "bdev_lvol_get_lvstores", 00:10:37.182 "bdev_lvol_delete", 00:10:37.182 "bdev_lvol_set_read_only", 00:10:37.182 "bdev_lvol_resize", 00:10:37.182 "bdev_lvol_decouple_parent", 00:10:37.182 "bdev_lvol_inflate", 00:10:37.182 "bdev_lvol_rename", 00:10:37.182 "bdev_lvol_clone_bdev", 00:10:37.182 "bdev_lvol_clone", 00:10:37.182 "bdev_lvol_snapshot", 00:10:37.182 "bdev_lvol_create", 00:10:37.182 "bdev_lvol_delete_lvstore", 00:10:37.182 "bdev_lvol_rename_lvstore", 00:10:37.182 
"bdev_lvol_create_lvstore", 00:10:37.182 "bdev_raid_set_options", 00:10:37.182 "bdev_raid_remove_base_bdev", 00:10:37.182 "bdev_raid_add_base_bdev", 00:10:37.182 "bdev_raid_delete", 00:10:37.182 "bdev_raid_create", 00:10:37.182 "bdev_raid_get_bdevs", 00:10:37.182 "bdev_error_inject_error", 00:10:37.182 "bdev_error_delete", 00:10:37.182 "bdev_error_create", 00:10:37.182 "bdev_split_delete", 00:10:37.182 "bdev_split_create", 00:10:37.182 "bdev_delay_delete", 00:10:37.182 "bdev_delay_create", 00:10:37.182 "bdev_delay_update_latency", 00:10:37.182 "bdev_zone_block_delete", 00:10:37.182 "bdev_zone_block_create", 00:10:37.182 "blobfs_create", 00:10:37.182 "blobfs_detect", 00:10:37.182 "blobfs_set_cache_size", 00:10:37.182 "bdev_xnvme_delete", 00:10:37.182 "bdev_xnvme_create", 00:10:37.182 "bdev_aio_delete", 00:10:37.182 "bdev_aio_rescan", 00:10:37.182 "bdev_aio_create", 00:10:37.182 "bdev_ftl_set_property", 00:10:37.182 "bdev_ftl_get_properties", 00:10:37.182 "bdev_ftl_get_stats", 00:10:37.182 "bdev_ftl_unmap", 00:10:37.182 "bdev_ftl_unload", 00:10:37.182 "bdev_ftl_delete", 00:10:37.182 "bdev_ftl_load", 00:10:37.182 "bdev_ftl_create", 00:10:37.183 "bdev_virtio_attach_controller", 00:10:37.183 "bdev_virtio_scsi_get_devices", 00:10:37.183 "bdev_virtio_detach_controller", 00:10:37.183 "bdev_virtio_blk_set_hotplug", 00:10:37.183 "bdev_iscsi_delete", 00:10:37.183 "bdev_iscsi_create", 00:10:37.183 "bdev_iscsi_set_options", 00:10:37.183 "accel_error_inject_error", 00:10:37.183 "ioat_scan_accel_module", 00:10:37.183 "dsa_scan_accel_module", 00:10:37.183 "iaa_scan_accel_module", 00:10:37.183 "keyring_file_remove_key", 00:10:37.183 "keyring_file_add_key", 00:10:37.183 "keyring_linux_set_options", 00:10:37.183 "fsdev_aio_delete", 00:10:37.183 "fsdev_aio_create", 00:10:37.183 "iscsi_get_histogram", 00:10:37.183 "iscsi_enable_histogram", 00:10:37.183 "iscsi_set_options", 00:10:37.183 "iscsi_get_auth_groups", 00:10:37.183 "iscsi_auth_group_remove_secret", 00:10:37.183 "iscsi_auth_group_add_secret", 00:10:37.183 "iscsi_delete_auth_group", 00:10:37.183 "iscsi_create_auth_group", 00:10:37.183 "iscsi_set_discovery_auth", 00:10:37.183 "iscsi_get_options", 00:10:37.183 "iscsi_target_node_request_logout", 00:10:37.183 "iscsi_target_node_set_redirect", 00:10:37.183 "iscsi_target_node_set_auth", 00:10:37.183 "iscsi_target_node_add_lun", 00:10:37.183 "iscsi_get_stats", 00:10:37.183 "iscsi_get_connections", 00:10:37.183 "iscsi_portal_group_set_auth", 00:10:37.183 "iscsi_start_portal_group", 00:10:37.183 "iscsi_delete_portal_group", 00:10:37.183 "iscsi_create_portal_group", 00:10:37.183 "iscsi_get_portal_groups", 00:10:37.183 "iscsi_delete_target_node", 00:10:37.183 "iscsi_target_node_remove_pg_ig_maps", 00:10:37.183 "iscsi_target_node_add_pg_ig_maps", 00:10:37.183 "iscsi_create_target_node", 00:10:37.183 "iscsi_get_target_nodes", 00:10:37.183 "iscsi_delete_initiator_group", 00:10:37.183 "iscsi_initiator_group_remove_initiators", 00:10:37.183 "iscsi_initiator_group_add_initiators", 00:10:37.183 "iscsi_create_initiator_group", 00:10:37.183 "iscsi_get_initiator_groups", 00:10:37.183 "nvmf_set_crdt", 00:10:37.183 "nvmf_set_config", 00:10:37.183 "nvmf_set_max_subsystems", 00:10:37.183 "nvmf_stop_mdns_prr", 00:10:37.183 "nvmf_publish_mdns_prr", 00:10:37.183 "nvmf_subsystem_get_listeners", 00:10:37.183 "nvmf_subsystem_get_qpairs", 00:10:37.183 "nvmf_subsystem_get_controllers", 00:10:37.183 "nvmf_get_stats", 00:10:37.183 "nvmf_get_transports", 00:10:37.183 "nvmf_create_transport", 00:10:37.183 "nvmf_get_targets", 00:10:37.183 
"nvmf_delete_target", 00:10:37.183 "nvmf_create_target", 00:10:37.183 "nvmf_subsystem_allow_any_host", 00:10:37.183 "nvmf_subsystem_set_keys", 00:10:37.183 "nvmf_subsystem_remove_host", 00:10:37.183 "nvmf_subsystem_add_host", 00:10:37.183 "nvmf_ns_remove_host", 00:10:37.183 "nvmf_ns_add_host", 00:10:37.183 "nvmf_subsystem_remove_ns", 00:10:37.183 "nvmf_subsystem_set_ns_ana_group", 00:10:37.183 "nvmf_subsystem_add_ns", 00:10:37.183 "nvmf_subsystem_listener_set_ana_state", 00:10:37.183 "nvmf_discovery_get_referrals", 00:10:37.183 "nvmf_discovery_remove_referral", 00:10:37.183 "nvmf_discovery_add_referral", 00:10:37.183 "nvmf_subsystem_remove_listener", 00:10:37.183 "nvmf_subsystem_add_listener", 00:10:37.183 "nvmf_delete_subsystem", 00:10:37.183 "nvmf_create_subsystem", 00:10:37.183 "nvmf_get_subsystems", 00:10:37.183 "env_dpdk_get_mem_stats", 00:10:37.183 "nbd_get_disks", 00:10:37.183 "nbd_stop_disk", 00:10:37.183 "nbd_start_disk", 00:10:37.183 "ublk_recover_disk", 00:10:37.183 "ublk_get_disks", 00:10:37.183 "ublk_stop_disk", 00:10:37.183 "ublk_start_disk", 00:10:37.183 "ublk_destroy_target", 00:10:37.183 "ublk_create_target", 00:10:37.183 "virtio_blk_create_transport", 00:10:37.183 "virtio_blk_get_transports", 00:10:37.183 "vhost_controller_set_coalescing", 00:10:37.183 "vhost_get_controllers", 00:10:37.183 "vhost_delete_controller", 00:10:37.183 "vhost_create_blk_controller", 00:10:37.183 "vhost_scsi_controller_remove_target", 00:10:37.183 "vhost_scsi_controller_add_target", 00:10:37.183 "vhost_start_scsi_controller", 00:10:37.183 "vhost_create_scsi_controller", 00:10:37.183 "thread_set_cpumask", 00:10:37.183 "scheduler_set_options", 00:10:37.183 "framework_get_governor", 00:10:37.183 "framework_get_scheduler", 00:10:37.183 "framework_set_scheduler", 00:10:37.183 "framework_get_reactors", 00:10:37.183 "thread_get_io_channels", 00:10:37.183 "thread_get_pollers", 00:10:37.183 "thread_get_stats", 00:10:37.183 "framework_monitor_context_switch", 00:10:37.183 "spdk_kill_instance", 00:10:37.183 "log_enable_timestamps", 00:10:37.183 "log_get_flags", 00:10:37.183 "log_clear_flag", 00:10:37.183 "log_set_flag", 00:10:37.183 "log_get_level", 00:10:37.183 "log_set_level", 00:10:37.183 "log_get_print_level", 00:10:37.183 "log_set_print_level", 00:10:37.183 "framework_enable_cpumask_locks", 00:10:37.183 "framework_disable_cpumask_locks", 00:10:37.183 "framework_wait_init", 00:10:37.183 "framework_start_init", 00:10:37.183 "scsi_get_devices", 00:10:37.183 "bdev_get_histogram", 00:10:37.183 "bdev_enable_histogram", 00:10:37.183 "bdev_set_qos_limit", 00:10:37.183 "bdev_set_qd_sampling_period", 00:10:37.183 "bdev_get_bdevs", 00:10:37.183 "bdev_reset_iostat", 00:10:37.183 "bdev_get_iostat", 00:10:37.183 "bdev_examine", 00:10:37.183 "bdev_wait_for_examine", 00:10:37.183 "bdev_set_options", 00:10:37.183 "accel_get_stats", 00:10:37.183 "accel_set_options", 00:10:37.183 "accel_set_driver", 00:10:37.183 "accel_crypto_key_destroy", 00:10:37.183 "accel_crypto_keys_get", 00:10:37.183 "accel_crypto_key_create", 00:10:37.183 "accel_assign_opc", 00:10:37.183 "accel_get_module_info", 00:10:37.183 "accel_get_opc_assignments", 00:10:37.183 "vmd_rescan", 00:10:37.183 "vmd_remove_device", 00:10:37.183 "vmd_enable", 00:10:37.183 "sock_get_default_impl", 00:10:37.183 "sock_set_default_impl", 00:10:37.183 "sock_impl_set_options", 00:10:37.183 "sock_impl_get_options", 00:10:37.183 "iobuf_get_stats", 00:10:37.183 "iobuf_set_options", 00:10:37.183 "keyring_get_keys", 00:10:37.183 "framework_get_pci_devices", 00:10:37.183 
"framework_get_config", 00:10:37.183 "framework_get_subsystems", 00:10:37.183 "fsdev_set_opts", 00:10:37.183 "fsdev_get_opts", 00:10:37.183 "trace_get_info", 00:10:37.183 "trace_get_tpoint_group_mask", 00:10:37.183 "trace_disable_tpoint_group", 00:10:37.183 "trace_enable_tpoint_group", 00:10:37.183 "trace_clear_tpoint_mask", 00:10:37.183 "trace_set_tpoint_mask", 00:10:37.183 "notify_get_notifications", 00:10:37.183 "notify_get_types", 00:10:37.183 "spdk_get_version", 00:10:37.183 "rpc_get_methods" 00:10:37.183 ] 00:10:37.183 10:02:07 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:10:37.183 10:02:07 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:37.183 10:02:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:37.183 10:02:07 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:37.183 10:02:07 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59100 00:10:37.183 10:02:07 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 59100 ']' 00:10:37.183 10:02:07 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 59100 00:10:37.183 10:02:07 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:10:37.183 10:02:07 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:37.183 10:02:07 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59100 00:10:37.183 killing process with pid 59100 00:10:37.183 10:02:07 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:37.183 10:02:07 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:37.183 10:02:07 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59100' 00:10:37.183 10:02:07 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 59100 00:10:37.183 10:02:07 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 59100 00:10:39.714 ************************************ 00:10:39.714 END TEST spdkcli_tcp 00:10:39.714 ************************************ 00:10:39.714 00:10:39.714 real 0m4.578s 00:10:39.714 user 0m8.094s 00:10:39.714 sys 0m0.778s 00:10:39.714 10:02:10 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:39.714 10:02:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:39.972 10:02:10 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:39.972 10:02:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:39.972 10:02:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:39.972 10:02:10 -- common/autotest_common.sh@10 -- # set +x 00:10:39.972 ************************************ 00:10:39.972 START TEST dpdk_mem_utility 00:10:39.972 ************************************ 00:10:39.972 10:02:10 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:39.972 * Looking for test storage... 
00:10:39.972 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:10:39.972 10:02:10 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:39.972 10:02:10 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:10:39.972 10:02:10 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:39.972 10:02:10 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:39.972 10:02:10 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:39.972 10:02:10 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:39.972 10:02:10 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:39.972 10:02:10 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:10:39.972 10:02:10 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:10:39.972 10:02:10 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:10:39.972 10:02:10 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:10:39.972 10:02:10 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:10:39.972 10:02:10 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:10:39.972 10:02:10 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:10:39.972 10:02:10 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:39.972 10:02:10 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:10:39.972 10:02:10 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:10:39.972 10:02:10 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:39.972 10:02:10 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:39.972 10:02:10 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:10:39.972 10:02:10 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:10:39.972 10:02:10 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:39.972 10:02:10 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:10:39.972 10:02:10 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:10:39.972 10:02:10 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:10:39.972 10:02:10 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:10:39.972 10:02:10 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:39.972 10:02:10 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:10:39.972 10:02:10 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:10:39.972 10:02:10 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:39.972 10:02:10 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:39.972 10:02:10 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:10:39.972 10:02:10 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:39.972 10:02:10 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:39.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.972 --rc genhtml_branch_coverage=1 00:10:39.972 --rc genhtml_function_coverage=1 00:10:39.972 --rc genhtml_legend=1 00:10:39.972 --rc geninfo_all_blocks=1 00:10:39.972 --rc geninfo_unexecuted_blocks=1 00:10:39.972 00:10:39.972 ' 00:10:39.972 10:02:10 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:39.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.972 --rc 
genhtml_branch_coverage=1 00:10:39.972 --rc genhtml_function_coverage=1 00:10:39.972 --rc genhtml_legend=1 00:10:39.972 --rc geninfo_all_blocks=1 00:10:39.972 --rc geninfo_unexecuted_blocks=1 00:10:39.972 00:10:39.972 ' 00:10:39.972 10:02:10 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:39.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.973 --rc genhtml_branch_coverage=1 00:10:39.973 --rc genhtml_function_coverage=1 00:10:39.973 --rc genhtml_legend=1 00:10:39.973 --rc geninfo_all_blocks=1 00:10:39.973 --rc geninfo_unexecuted_blocks=1 00:10:39.973 00:10:39.973 ' 00:10:39.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.973 10:02:10 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:39.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.973 --rc genhtml_branch_coverage=1 00:10:39.973 --rc genhtml_function_coverage=1 00:10:39.973 --rc genhtml_legend=1 00:10:39.973 --rc geninfo_all_blocks=1 00:10:39.973 --rc geninfo_unexecuted_blocks=1 00:10:39.973 00:10:39.973 ' 00:10:39.973 10:02:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:10:39.973 10:02:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59228 00:10:39.973 10:02:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59228 00:10:39.973 10:02:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:39.973 10:02:10 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 59228 ']' 00:10:39.973 10:02:10 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.973 10:02:10 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:39.973 10:02:10 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.973 10:02:10 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:39.973 10:02:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:40.231 [2024-12-09 10:02:10.835998] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
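Everything test_dpdk_mem_info.sh prints below comes from two tools: the env_dpdk_get_mem_stats RPC, which makes the running target write its DPDK allocator state to a dump file, and scripts/dpdk_mem_info.py, which renders that dump, plain for the heap/mempool/memzone summary and with -m 0 for the per-element view of heap 0. The test reduced to its essentials:

    MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py

    # Ask the target to dump its memory state; the reply names the file written.
    scripts/rpc.py env_dpdk_get_mem_stats    # -> { "filename": "/tmp/spdk_mem_dump.txt" }

    "$MEM_SCRIPT"          # heaps, mempools and memzones, summarized
    "$MEM_SCRIPT" -m 0     # busy and free element lists for heap id 0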
00:10:40.231 [2024-12-09 10:02:10.836474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59228 ]
00:10:40.231 [2024-12-09 10:02:11.020193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:40.488 [2024-12-09 10:02:11.194691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:41.422 10:02:12 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:41.422 10:02:12 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0
00:10:41.422 10:02:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:10:41.422 10:02:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:10:41.422 10:02:12 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:41.422 10:02:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:10:41.422 {
00:10:41.422 "filename": "/tmp/spdk_mem_dump.txt"
00:10:41.422 }
00:10:41.422 10:02:12 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:41.422 10:02:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:10:41.682 DPDK memory size 824.000000 MiB in 1 heap(s)
00:10:41.682 1 heaps totaling size 824.000000 MiB
00:10:41.682 size: 824.000000 MiB heap id: 0
00:10:41.682 end heaps----------
00:10:41.683 9 mempools totaling size 603.782043 MiB
00:10:41.683 size: 212.674988 MiB name: PDU_immediate_data_Pool
00:10:41.683 size: 158.602051 MiB name: PDU_data_out_Pool
00:10:41.683 size: 100.555481 MiB name: bdev_io_59228
00:10:41.683 size: 50.003479 MiB name: msgpool_59228
00:10:41.683 size: 36.509338 MiB name: fsdev_io_59228
00:10:41.683 size: 21.763794 MiB name: PDU_Pool
00:10:41.683 size: 19.513306 MiB name: SCSI_TASK_Pool
00:10:41.683 size: 4.133484 MiB name: evtpool_59228
00:10:41.683 size: 0.026123 MiB name: Session_Pool
00:10:41.683 end mempools-------
00:10:41.683 6 memzones totaling size 4.142822 MiB
00:10:41.683 size: 1.000366 MiB name: RG_ring_0_59228
00:10:41.683 size: 1.000366 MiB name: RG_ring_1_59228
00:10:41.683 size: 1.000366 MiB name: RG_ring_4_59228
00:10:41.683 size: 1.000366 MiB name: RG_ring_5_59228
00:10:41.683 size: 0.125366 MiB name: RG_ring_2_59228
00:10:41.683 size: 0.015991 MiB name: RG_ring_3_59228
00:10:41.683 end memzones-------
00:10:41.683 10:02:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0
00:10:41.683 heap id: 0 total size: 824.000000 MiB number of busy elements: 313 number of free elements: 18
00:10:41.683 list of free elements. size: 16.781860 MiB
00:10:41.683 element at address: 0x200006400000 with size: 1.995972 MiB
00:10:41.683 element at address: 0x20000a600000 with size: 1.995972 MiB
00:10:41.683 element at address: 0x200003e00000 with size: 1.991028 MiB
00:10:41.683 element at address: 0x200019500040 with size: 0.999939 MiB
00:10:41.683 element at address: 0x200019900040 with size: 0.999939 MiB
00:10:41.683 element at address: 0x200019a00000 with size: 0.999084 MiB
00:10:41.683 element at address: 0x200032600000 with size: 0.994324 MiB
00:10:41.683 element at address: 0x200000400000 with size: 0.992004 MiB
00:10:41.683 element at address: 0x200019200000 with size: 0.959656 MiB
00:10:41.683 element at address: 0x200019d00040 with size: 0.936401 MiB
00:10:41.683 element at address: 0x200000200000 with size: 0.716980 MiB
00:10:41.683 element at address: 0x20001b400000 with size: 0.563171 MiB
00:10:41.683 element at address: 0x200000c00000 with size: 0.489197 MiB
00:10:41.683 element at address: 0x200019600000 with size: 0.487976 MiB
00:10:41.683 element at address: 0x200019e00000 with size: 0.485413 MiB
00:10:41.683 element at address: 0x200012c00000 with size: 0.433472 MiB
00:10:41.683 element at address: 0x200028800000 with size: 0.390442 MiB
00:10:41.683 element at address: 0x200000800000 with size: 0.350891 MiB
00:10:41.683 list of standard malloc elements. size: 199.287231 MiB
00:10:41.683 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:10:41.683 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:10:41.683 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:10:41.683 element at address: 0x2000197fff80 with size: 1.000183 MiB
00:10:41.683 element at address: 0x200019bfff80 with size: 1.000183 MiB
00:10:41.683 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:10:41.683 element at address: 0x200019deff40 with size: 0.062683 MiB
00:10:41.683 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:10:41.683 element at address: 0x20000a5ff040 with size: 0.000427 MiB
00:10:41.683 element at address: 0x200019defdc0 with size: 0.000366 MiB
00:10:41.683 element at address: 0x200012bff040 with size: 0.000305 MiB
[... remaining standard malloc elements, 0x2000002d7b00 through 0x20002886fe80, each with size: 0.000244 MiB, elided for readability ...]
00:10:41.685 list of memzone associated elements. size: 607.930908 MiB
00:10:41.685 element at address: 0x20001b4954c0 with size: 211.416809 MiB
00:10:41.685 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:10:41.685 element at address: 0x20002886ff80 with size: 157.562622 MiB
00:10:41.685 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
[... the remaining element/memzone-info pairs elided: one pair per memzone named in the summary above (the MP_*_Pool_0 and MP_*_59228_0 mempool zones, the RG_MP_* reserved regions, RG_ring_0..5_59228, MP_Session_Pool_0 and the small MP_msgpool/fsdev_io/bdev_io/Session_Pool markers), sizes ranging from 100.054932 MiB down to 0.000183 MiB ...]
00:10:41.685 10:02:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:10:41.685 10:02:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59228
00:10:41.685 10:02:12 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 59228 ']'
00:10:41.685 10:02:12 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 59228
00:10:41.685 10:02:12 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:10:41.685 10:02:12 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:41.685 10:02:12 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59228
00:10:41.685 10:02:12 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:41.685 killing process with pid 59228
00:10:41.685 10:02:12 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:41.685 10:02:12 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59228'
00:10:41.685 10:02:12 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 59228
00:10:44.218 10:02:12 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 59228
00:10:44.218
00:10:44.218 real 0m4.395s
00:10:44.218 user 0m4.320s
00:10:44.218 sys 0m0.740s
00:10:44.218 10:02:14 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:44.218 10:02:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:10:44.218 ************************************
00:10:44.218 END TEST dpdk_mem_utility
00:10:44.218 ************************************
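The memory-inspection flow exercised above (env_dpdk_get_mem_stats over RPC, then dpdk_mem_info.py against the resulting dump) can be repeated by hand against any running SPDK application. A minimal sketch, assuming the tree layout from this run and a target already listening on the default RPC socket; the script appears to pick up /tmp/spdk_mem_dump.txt by default, matching the filename the RPC returned above:

    #!/usr/bin/env bash
    SPDK=/home/vagrant/spdk_repo/spdk

    # Ask the running target to write out its DPDK memory statistics;
    # the RPC replies with the dump filename (/tmp/spdk_mem_dump.txt above).
    "$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats

    # Summarize heaps, mempools and memzones from the dump ...
    "$SPDK/scripts/dpdk_mem_info.py"

    # ... or expand a single heap element by element (-m 0 selects heap
    # id 0), which is what produced the long listing above.
    "$SPDK/scripts/dpdk_mem_info.py" -m 0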
00:10:44.218 10:02:14 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:10:44.218 10:02:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:44.218 10:02:14 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:44.218 10:02:14 -- common/autotest_common.sh@10 -- # set +x
00:10:44.218 ************************************
00:10:44.218 START TEST event
00:10:44.218 ************************************
00:10:44.218 10:02:14 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:10:44.476 * Looking for test storage...
00:10:44.476 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
[shell trace elided: common/autotest_common.sh@1710-@1725 and scripts/common.sh@333-@368 detect the installed lcov version (cmp_versions 1.15 '<' 2) and export LCOV_OPTS/LCOV with --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 plus the genhtml/geninfo coverage flags]
00:10:44.476 10:02:15 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:10:44.476 10:02:15 event -- bdev/nbd_common.sh@6 -- # set -e
00:10:44.476 10:02:15 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:10:44.476 10:02:15 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:10:44.476 10:02:15 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:44.476 10:02:15 event -- common/autotest_common.sh@10 -- # set +x
00:10:44.476 ************************************
00:10:44.476 START TEST event_perf
00:10:44.476 ************************************
00:10:44.476 10:02:15 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:10:44.476 Running I/O for 1 seconds...[2024-12-09 10:02:15.205685] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization...
00:10:44.476 [2024-12-09 10:02:15.206228] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59341 ]
00:10:44.735 [2024-12-09 10:02:15.396686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:44.993 [2024-12-09 10:02:15.592234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:44.993 [2024-12-09 10:02:15.592429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:10:44.993 [2024-12-09 10:02:15.592586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:10:44.993 [2024-12-09 10:02:15.592576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:46.368 Running I/O for 1 seconds...
00:10:46.368 lcore 0: 133383
00:10:46.368 lcore 1: 133382
00:10:46.368 lcore 2: 133382
00:10:46.368 lcore 3: 133382
00:10:46.368 done.
00:10:46.368
00:10:46.368 real 0m1.767s
00:10:46.368 user 0m4.507s
00:10:46.368 sys 0m0.130s
00:10:46.368 10:02:16 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:46.368 10:02:16 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:10:46.368 ************************************
00:10:46.368 END TEST event_perf
00:10:46.368 ************************************
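The lcore lines above appear to be per-core event counts for the one-second window (-t 1): roughly 133k events on each of the four reactors enabled by -m 0xF. A quick way to total them from a saved copy of this console output; 'console.log' is a hypothetical filename, not something the harness writes:

    # Sum the per-lcore counts printed by event_perf; the trailing field
    # of each "lcore N: COUNT" line is the count, so the timestamp prefix
    # does not matter.
    awk '/lcore [0-9]+:/ { total += $NF; n++ }
         END { if (n) printf "%d events across %d lcores\n", total, n }' console.log

Against the four lines above this prints 533529 events across 4 lcores.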
00:10:46.368 10:02:16 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:10:46.368 10:02:16 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:10:46.368 10:02:16 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:46.368 10:02:16 event -- common/autotest_common.sh@10 -- # set +x
00:10:46.368 ************************************
00:10:46.368 START TEST event_reactor
00:10:46.368 ************************************
00:10:46.368 10:02:16 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:10:46.627 [2024-12-09 10:02:17.029481] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization...
00:10:46.627 [2024-12-09 10:02:17.029674] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59381 ]
00:10:46.627 [2024-12-09 10:02:17.212471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:46.627 [2024-12-09 10:02:17.397457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:48.061 test_start
00:10:48.061 oneshot
00:10:48.061 tick 100
00:10:48.061 tick 100
00:10:48.061 tick 250
00:10:48.061 tick 100
00:10:48.061 tick 100
00:10:48.061 tick 100
00:10:48.061 tick 250
00:10:48.061 tick 500
00:10:48.061 tick 100
00:10:48.061 tick 100
00:10:48.061 tick 250
00:10:48.061 tick 100
00:10:48.061 tick 100
00:10:48.061 test_end
00:10:48.061
00:10:48.061 real 0m1.732s
00:10:48.061 user 0m1.523s
00:10:48.061 sys 0m0.098s
00:10:48.061 10:02:18 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:48.061 10:02:18 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:10:48.061 ************************************
00:10:48.061 END TEST event_reactor
00:10:48.061 ************************************
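Each "tick N" line above appears to record one timer expiry during the reactor test's one-second run; three distinct periods show up (100, 250 and 500, plus a single oneshot). A small sketch tallying them from a saved copy of the output ('console.log' again hypothetical):

    # Count tick lines per period; keying off the second-to-last field
    # makes this work with or without the Jenkins timestamp prefix.
    awk '$(NF-1) == "tick" { n[$NF]++ }
         END { for (p in n) printf "tick %s fired %s times\n", p, n[p] }' console.log

On the trace above this reports 100 firing nine times, 250 three times and 500 once, which is consistent with the one-second -t window if the numbers are millisecond periods.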
00:10:48.061 10:02:18 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:10:48.061 10:02:18 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:10:48.061 10:02:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:48.061 10:02:18 event -- common/autotest_common.sh@10 -- # set +x
00:10:48.061 ************************************
00:10:48.061 START TEST event_reactor_perf
00:10:48.061 ************************************
00:10:48.061 10:02:18 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:10:48.061 [2024-12-09 10:02:18.828096] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization...
00:10:48.061 [2024-12-09 10:02:18.828711] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59417 ]
00:10:48.320 [2024-12-09 10:02:19.031965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:48.579 [2024-12-09 10:02:19.191478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:49.956 test_start
00:10:49.956 test_end
00:10:49.956 Performance: 270529 events per second
00:10:49.956
00:10:49.956 real 0m1.751s
00:10:49.956 user 0m1.516s
00:10:49.956 sys 0m0.122s
00:10:49.956 ************************************
00:10:49.956 END TEST event_reactor_perf
00:10:49.956 ************************************
00:10:49.956 10:02:20 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:49.956 10:02:20 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
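reactor_perf condenses its result into the single Performance line above. A hedged sketch for pulling that figure out of a saved log copy and flagging a slow run; 'console.log' is hypothetical, and the 200000 floor is an arbitrary example threshold, not an SPDK default:

    perf=$(grep -oE 'Performance: [0-9]+' console.log | awk '{ print $2; exit }')
    if [ "${perf:-0}" -ge 200000 ]; then
        echo "OK: $perf events per second"
    else
        echo "LOW: ${perf:-0} events per second"
    fi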
00:10:49.956 10:02:20 event -- event/event.sh@49 -- # uname -s
00:10:49.956 10:02:20 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:10:49.956 10:02:20 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:10:49.956 10:02:20 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:49.956 10:02:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:49.956 10:02:20 event -- common/autotest_common.sh@10 -- # set +x
00:10:49.956 ************************************
00:10:49.956 START TEST event_scheduler
00:10:49.956 ************************************
00:10:49.956 10:02:20 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:10:49.956 * Looking for test storage...
00:10:49.956 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler
[shell trace elided: the same lcov version-detection and LCOV_OPTS/LCOV export sequence as in the event test above]
00:10:50.216 10:02:20 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:10:50.216 10:02:20 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59493
00:10:50.216 10:02:20 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:10:50.216 10:02:20 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59493
00:10:50.216 10:02:20 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59493 ']'
00:10:50.216 10:02:20 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:50.216 10:02:20 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:10:50.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:50.216 10:02:20 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:50.216 10:02:20 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:50.216 10:02:20 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:50.216 10:02:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:10:50.475 [2024-12-09 10:02:20.862396] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization...
00:10:50.475 [2024-12-09 10:02:20.862593] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59493 ]
00:10:50.475 [2024-12-09 10:02:21.042013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:50.475 [2024-12-09 10:02:21.201518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:50.475 [2024-12-09 10:02:21.201664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:50.475 [2024-12-09 10:02:21.201774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:10:50.475 [2024-12-09 10:02:21.201801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:10:51.410 10:02:21 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:51.410 10:02:21 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0
00:10:51.410 10:02:21 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:10:51.410 10:02:21 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:51.410 10:02:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:10:51.410 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:10:51.410 POWER: Cannot set governor of lcore 0 to userspace
00:10:51.410 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:10:51.410 POWER: Cannot set governor of lcore 0 to performance
00:10:51.410 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:10:51.410 POWER: Cannot set governor of lcore 0 to userspace
00:10:51.410 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:10:51.410 POWER: Cannot set governor of lcore 0 to userspace
00:10:51.410 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0
00:10:51.410 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory
00:10:51.410 POWER: Unable to set Power Management Environment for lcore 0
00:10:51.410 [2024-12-09 10:02:21.968132] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0
00:10:51.410 [2024-12-09 10:02:21.968165] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0
00:10:51.410 [2024-12-09 10:02:21.968181] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
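The POWER/GUEST_CHANNEL/dpdk_governor errors above are the dynamic scheduler probing cpufreq governors that this VM does not expose; it falls back and, as the lines that follow show, comes up with its default limits. The bring-up itself is RPC-driven: --wait-for-rpc pauses the app until framework_set_scheduler and framework_start_init arrive on /var/tmp/spdk.sock. The same sequence can be driven by hand; a sketch assuming the tree layout from this run:

    SPDK=/home/vagrant/spdk_repo/spdk

    # Start the test app paused, waiting for RPCs (same flags as the harness).
    "$SPDK/test/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &
    sleep 1   # crude stand-in for the harness's waitforlisten polling

    # Select the dynamic scheduler, then let initialization finish.
    "$SPDK/scripts/rpc.py" framework_set_scheduler dynamic
    "$SPDK/scripts/rpc.py" framework_start_init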
00:10:51.410 [2024-12-09 10:02:21.968207] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:10:51.410 [2024-12-09 10:02:21.968221] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:10:51.410 [2024-12-09 10:02:21.968235] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:10:51.410 10:02:21 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:51.410 10:02:21 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:10:51.410 10:02:21 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:51.410 10:02:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:10:51.669 [2024-12-09 10:02:22.347068] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:10:51.669 10:02:22 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:51.669 10:02:22 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:10:51.669 10:02:22 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:51.669 10:02:22 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:51.669 10:02:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:10:51.669 ************************************
00:10:51.669 START TEST scheduler_create_thread
00:10:51.669 ************************************
00:10:51.669 10:02:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread
00:10:51.669 10:02:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:10:51.669 10:02:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:51.669 10:02:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:10:51.669 2
00:10:51.669 10:02:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
[... identical rpc_cmd/xtrace blocks for scheduler/scheduler.sh@13-@19 elided: scheduler_thread_create -n active_pinned -m 0x2/0x4/0x8 -a 100 and -n idle_pinned -m 0x1/0x2/0x4/0x8 -a 0, returning thread ids 3 through 9 ...]
00:10:51.670 10:02:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:10:51.670 10:02:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:51.670 10:02:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:10:51.670 10
00:10:51.670 10:02:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:51.670 10:02:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:10:51.670 10:02:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:51.670 10:02:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
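The scheduler_thread_create, scheduler_thread_set_active and scheduler_thread_delete methods used here are not core SPDK RPCs; they come from the test's plugin, loaded through rpc_cmd's --plugin flag. Calling one by hand looks roughly like this; the PYTHONPATH line is an assumption about where the plugin module lives, since the harness arranges the import path itself:

    SPDK=/home/vagrant/spdk_repo/spdk
    # Assumed location of scheduler_plugin; rpc.py imports it by module name.
    export PYTHONPATH="$SPDK/test/event/scheduler:$PYTHONPATH"

    # Create a thread pinned to core 0 (-m 0x1) reporting 100% activity
    # (-a 100); the RPC prints the new thread id, as in the trace above.
    "$SPDK/scripts/rpc.py" --plugin scheduler_plugin \
        scheduler_thread_create -n active_pinned -m 0x1 -a 100

The trace continues below with the half_active thread's id captured into thread_id rather than echoed, scheduler_thread_set_active raising it to 50% activity, and a throwaway thread being created and deleted.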
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.670 10:02:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:10:51.670 10:02:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:10:51.670 10:02:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.670 10:02:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:51.930 10:02:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.930 10:02:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:10:51.930 10:02:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.930 10:02:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:53.310 10:02:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.310 10:02:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:10:53.310 10:02:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:10:53.310 10:02:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.310 10:02:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:54.245 10:02:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.245 00:10:54.245 real 0m2.626s 00:10:54.245 user 0m0.018s 00:10:54.245 sys 0m0.006s 00:10:54.245 10:02:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:54.245 10:02:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:54.245 ************************************ 00:10:54.245 END TEST scheduler_create_thread 00:10:54.245 ************************************ 00:10:54.245 10:02:25 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:10:54.245 10:02:25 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59493 00:10:54.245 10:02:25 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59493 ']' 00:10:54.245 10:02:25 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59493 00:10:54.245 10:02:25 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:10:54.245 10:02:25 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:54.506 10:02:25 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59493 00:10:54.506 10:02:25 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:10:54.506 10:02:25 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:10:54.506 killing process with pid 59493 00:10:54.506 10:02:25 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59493' 00:10:54.506 10:02:25 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59493 00:10:54.506 10:02:25 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 59493 00:10:54.764 [2024-12-09 10:02:25.467185] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:10:56.138 00:10:56.138 real 0m6.151s 00:10:56.138 user 0m10.957s 00:10:56.138 sys 0m0.614s 00:10:56.138 10:02:26 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:56.138 ************************************ 00:10:56.138 END TEST event_scheduler 00:10:56.138 ************************************ 00:10:56.138 10:02:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:56.138 10:02:26 event -- event/event.sh@51 -- # modprobe -n nbd 00:10:56.138 10:02:26 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:10:56.138 10:02:26 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:56.138 10:02:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:56.138 10:02:26 event -- common/autotest_common.sh@10 -- # set +x 00:10:56.138 ************************************ 00:10:56.138 START TEST app_repeat 00:10:56.138 ************************************ 00:10:56.138 10:02:26 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:10:56.138 10:02:26 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:56.138 10:02:26 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:56.138 10:02:26 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:10:56.138 10:02:26 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:56.138 10:02:26 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:10:56.138 10:02:26 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:10:56.138 10:02:26 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:10:56.138 10:02:26 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59610 00:10:56.138 10:02:26 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:10:56.138 Process app_repeat pid: 59610 00:10:56.138 10:02:26 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59610' 00:10:56.138 10:02:26 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:10:56.138 10:02:26 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:56.138 spdk_app_start Round 0 00:10:56.138 10:02:26 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:10:56.138 10:02:26 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59610 /var/tmp/spdk-nbd.sock 00:10:56.138 10:02:26 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59610 ']' 00:10:56.138 10:02:26 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:56.138 10:02:26 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:56.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:56.138 10:02:26 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:56.138 10:02:26 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:56.138 10:02:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:56.138 [2024-12-09 10:02:26.853371] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
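The scheduler_create_thread test that just finished drives everything through the scheduler RPC plugin: idle threads pinned to cores 0-3 (-m 0x1 through 0x8), unpinned threads with fixed loads (-a 30, -a 0, -a 100), one thread raised to 50% active at runtime, and one deleted while the scheduler is running. A minimal sketch of that sequence, assuming rpc.py is invoked directly with the plugin (the harness actually wraps this in its rpc_cmd helper, so the exact invocation here is an assumption):

    rpc="scripts/rpc.py --plugin scheduler_plugin"           # assumed direct invocation; test uses rpc_cmd
    $rpc scheduler_thread_create -n idle_pinned -m 0x1 -a 0  # idle thread pinned to core 0
    $rpc scheduler_thread_create -n one_third_active -a 30   # unpinned, ~30% busy
    tid=$($rpc scheduler_thread_create -n half_active -a 0)  # create idle, capture returned thread id
    $rpc scheduler_thread_set_active "$tid" 50               # raise its load at runtime
    tid=$($rpc scheduler_thread_create -n deleted -a 100)
    $rpc scheduler_thread_delete "$tid"                      # delete while the scheduler is running

The mix presumably gives the dynamic scheduler both pinned idle threads it must leave in place and variable-load threads it can rebalance or drop.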
00:10:56.138 [2024-12-09 10:02:26.853569] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59610 ] 00:10:56.396 [2024-12-09 10:02:27.040804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:56.655 [2024-12-09 10:02:27.202637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.655 [2024-12-09 10:02:27.202655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:57.238 10:02:27 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:57.238 10:02:27 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:10:57.238 10:02:27 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:57.533 Malloc0 00:10:57.533 10:02:28 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:58.099 Malloc1 00:10:58.099 10:02:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:58.099 10:02:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:58.099 10:02:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:58.099 10:02:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:58.099 10:02:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:58.099 10:02:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:58.099 10:02:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:58.099 10:02:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:58.099 10:02:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:58.099 10:02:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:58.099 10:02:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:58.099 10:02:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:58.099 10:02:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:58.099 10:02:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:58.099 10:02:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:58.099 10:02:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:58.357 /dev/nbd0 00:10:58.357 10:02:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:58.357 10:02:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:58.357 10:02:28 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:58.357 10:02:28 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:58.357 10:02:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:58.357 10:02:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:58.357 10:02:28 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:58.357 10:02:28 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:10:58.357 10:02:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:58.357 10:02:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:58.357 10:02:28 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:58.357 1+0 records in 00:10:58.357 1+0 records out 00:10:58.357 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336504 s, 12.2 MB/s 00:10:58.357 10:02:28 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:58.357 10:02:28 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:58.357 10:02:28 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:58.357 10:02:28 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:58.357 10:02:28 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:58.357 10:02:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:58.357 10:02:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:58.357 10:02:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:58.616 /dev/nbd1 00:10:58.616 10:02:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:58.616 10:02:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:58.616 10:02:29 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:58.616 10:02:29 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:58.616 10:02:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:58.616 10:02:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:58.616 10:02:29 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:58.616 10:02:29 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:58.616 10:02:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:58.616 10:02:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:58.616 10:02:29 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:58.616 1+0 records in 00:10:58.616 1+0 records out 00:10:58.616 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346404 s, 11.8 MB/s 00:10:58.616 10:02:29 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:58.616 10:02:29 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:58.616 10:02:29 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:58.616 10:02:29 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:58.616 10:02:29 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:58.616 10:02:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:58.616 10:02:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:58.616 10:02:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:58.616 10:02:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
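Both NBD attachments above pass the same readiness check: waitfornbd polls /proc/partitions for the device name, then proves the device actually answers I/O with a single 4 KiB O_DIRECT read, exactly the grep/dd/stat sequence in the trace. A sketch of that check (the retry delay and scratch path are assumptions; the harness uses its own test file under the repo):

    waitfornbd_sketch() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                                   # assumed retry cadence
        done
        # One 4 KiB direct read proves the device answers I/O, not just that it exists.
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]                                # the read must have produced data
    }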
00:10:58.616 10:02:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:58.875 10:02:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:58.875 { 00:10:58.875 "nbd_device": "/dev/nbd0", 00:10:58.875 "bdev_name": "Malloc0" 00:10:58.875 }, 00:10:58.875 { 00:10:58.875 "nbd_device": "/dev/nbd1", 00:10:58.875 "bdev_name": "Malloc1" 00:10:58.875 } 00:10:58.875 ]' 00:10:58.875 10:02:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:58.875 { 00:10:58.875 "nbd_device": "/dev/nbd0", 00:10:58.875 "bdev_name": "Malloc0" 00:10:58.875 }, 00:10:58.875 { 00:10:58.875 "nbd_device": "/dev/nbd1", 00:10:58.875 "bdev_name": "Malloc1" 00:10:58.875 } 00:10:58.875 ]' 00:10:58.875 10:02:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:58.875 10:02:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:58.875 /dev/nbd1' 00:10:58.875 10:02:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:58.875 /dev/nbd1' 00:10:58.875 10:02:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:58.875 10:02:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:58.875 10:02:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:58.875 10:02:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:58.875 10:02:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:58.875 10:02:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:58.875 10:02:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:58.875 10:02:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:58.875 10:02:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:58.875 10:02:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:58.875 10:02:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:58.875 10:02:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:58.875 256+0 records in 00:10:58.875 256+0 records out 00:10:58.875 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00474401 s, 221 MB/s 00:10:58.875 10:02:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:58.875 10:02:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:58.875 256+0 records in 00:10:58.875 256+0 records out 00:10:58.875 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0333374 s, 31.5 MB/s 00:10:58.875 10:02:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:58.875 10:02:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:59.133 256+0 records in 00:10:59.133 256+0 records out 00:10:59.134 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.033637 s, 31.2 MB/s 00:10:59.134 10:02:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:59.134 10:02:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:59.134 10:02:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:59.134 10:02:29 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:59.134 10:02:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:59.134 10:02:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:59.134 10:02:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:59.134 10:02:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:59.134 10:02:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:59.134 10:02:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:59.134 10:02:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:59.134 10:02:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:59.134 10:02:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:59.134 10:02:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:59.134 10:02:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:59.134 10:02:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:59.134 10:02:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:59.134 10:02:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:59.134 10:02:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:59.392 10:02:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:59.392 10:02:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:59.392 10:02:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:59.392 10:02:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:59.392 10:02:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:59.392 10:02:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:59.392 10:02:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:59.392 10:02:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:59.392 10:02:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:59.392 10:02:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:59.650 10:02:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:59.650 10:02:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:59.650 10:02:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:59.650 10:02:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:59.650 10:02:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:59.650 10:02:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:59.650 10:02:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:59.650 10:02:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:59.650 10:02:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:59.650 10:02:30 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:59.650 10:02:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:00.217 10:02:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:00.217 10:02:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:00.217 10:02:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:00.217 10:02:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:00.217 10:02:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:00.217 10:02:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:00.217 10:02:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:00.217 10:02:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:00.217 10:02:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:00.217 10:02:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:00.217 10:02:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:00.217 10:02:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:00.217 10:02:30 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:00.475 10:02:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:01.850 [2024-12-09 10:02:32.464200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:01.850 [2024-12-09 10:02:32.601459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:01.850 [2024-12-09 10:02:32.601464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.108 [2024-12-09 10:02:32.797601] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:02.108 [2024-12-09 10:02:32.797744] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:03.483 10:02:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:03.483 spdk_app_start Round 1 00:11:03.483 10:02:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:11:03.483 10:02:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59610 /var/tmp/spdk-nbd.sock 00:11:03.483 10:02:34 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59610 ']' 00:11:03.483 10:02:34 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:03.483 10:02:34 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:03.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:03.483 10:02:34 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
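Round 1 now repeats the data-integrity pass that Round 0 just completed: fill a 1 MiB scratch file from /dev/urandom, copy it onto each exported device with O_DIRECT, then byte-compare the first 1M of each device against the source before tearing the devices down. Condensed, with the log's paths shortened:

    nbd_list=(/dev/nbd0 /dev/nbd1)
    tmp_file=/tmp/nbdrandtest
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256              # 256 x 4 KiB = 1 MiB of random data
    for nbd in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct   # write phase
    done
    for nbd in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$nbd"                              # verify phase
    done
    rm "$tmp_file"

cmp -b prints the differing bytes on a mismatch, which makes a corrupted block easy to spot in a log like this one.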
00:11:03.483 10:02:34 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:03.483 10:02:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:03.742 10:02:34 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:03.742 10:02:34 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:11:03.742 10:02:34 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:04.330 Malloc0 00:11:04.330 10:02:34 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:04.589 Malloc1 00:11:04.589 10:02:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:04.589 10:02:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:04.589 10:02:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:04.589 10:02:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:04.589 10:02:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:04.589 10:02:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:04.589 10:02:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:04.589 10:02:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:04.589 10:02:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:04.589 10:02:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:04.589 10:02:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:04.589 10:02:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:04.589 10:02:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:04.589 10:02:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:04.589 10:02:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:04.589 10:02:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:04.847 /dev/nbd0 00:11:04.847 10:02:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:04.847 10:02:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:04.847 10:02:35 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:04.847 10:02:35 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:04.847 10:02:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:04.847 10:02:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:04.847 10:02:35 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:04.847 10:02:35 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:04.847 10:02:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:04.847 10:02:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:04.847 10:02:35 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:04.847 1+0 records in 00:11:04.847 1+0 records out 
00:11:04.847 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000304736 s, 13.4 MB/s 00:11:04.847 10:02:35 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:04.847 10:02:35 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:04.847 10:02:35 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:04.847 10:02:35 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:04.847 10:02:35 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:04.847 10:02:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:04.847 10:02:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:04.847 10:02:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:05.105 /dev/nbd1 00:11:05.105 10:02:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:05.105 10:02:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:05.105 10:02:35 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:05.105 10:02:35 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:05.105 10:02:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:05.105 10:02:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:05.105 10:02:35 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:05.105 10:02:35 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:05.105 10:02:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:05.105 10:02:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:05.105 10:02:35 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:05.105 1+0 records in 00:11:05.105 1+0 records out 00:11:05.105 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000337788 s, 12.1 MB/s 00:11:05.105 10:02:35 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:05.105 10:02:35 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:05.105 10:02:35 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:05.105 10:02:35 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:05.105 10:02:35 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:05.105 10:02:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:05.105 10:02:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:05.105 10:02:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:05.105 10:02:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:05.105 10:02:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:05.364 10:02:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:05.364 { 00:11:05.364 "nbd_device": "/dev/nbd0", 00:11:05.364 "bdev_name": "Malloc0" 00:11:05.364 }, 00:11:05.364 { 00:11:05.364 "nbd_device": "/dev/nbd1", 00:11:05.364 "bdev_name": "Malloc1" 00:11:05.364 } 
00:11:05.364 ]' 00:11:05.364 10:02:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:05.364 { 00:11:05.364 "nbd_device": "/dev/nbd0", 00:11:05.364 "bdev_name": "Malloc0" 00:11:05.364 }, 00:11:05.364 { 00:11:05.364 "nbd_device": "/dev/nbd1", 00:11:05.364 "bdev_name": "Malloc1" 00:11:05.364 } 00:11:05.364 ]' 00:11:05.364 10:02:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:05.623 10:02:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:05.623 /dev/nbd1' 00:11:05.623 10:02:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:05.623 /dev/nbd1' 00:11:05.623 10:02:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:05.623 10:02:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:05.623 10:02:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:05.623 10:02:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:05.623 10:02:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:05.623 10:02:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:05.623 10:02:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:05.623 10:02:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:05.623 10:02:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:05.623 10:02:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:05.623 10:02:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:05.623 10:02:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:05.623 256+0 records in 00:11:05.623 256+0 records out 00:11:05.623 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00690178 s, 152 MB/s 00:11:05.623 10:02:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:05.623 10:02:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:05.623 256+0 records in 00:11:05.623 256+0 records out 00:11:05.623 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0357247 s, 29.4 MB/s 00:11:05.623 10:02:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:05.623 10:02:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:05.623 256+0 records in 00:11:05.623 256+0 records out 00:11:05.623 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0386322 s, 27.1 MB/s 00:11:05.623 10:02:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:05.623 10:02:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:05.623 10:02:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:05.623 10:02:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:05.623 10:02:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:05.623 10:02:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:05.623 10:02:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:05.623 10:02:36 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:05.623 10:02:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:05.623 10:02:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:05.623 10:02:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:05.623 10:02:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:05.623 10:02:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:05.623 10:02:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:05.623 10:02:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:05.623 10:02:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:05.623 10:02:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:05.623 10:02:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:05.623 10:02:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:05.881 10:02:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:05.881 10:02:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:05.881 10:02:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:05.881 10:02:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:05.881 10:02:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:05.881 10:02:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:05.881 10:02:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:05.881 10:02:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:05.881 10:02:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:05.881 10:02:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:06.448 10:02:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:06.448 10:02:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:06.448 10:02:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:06.448 10:02:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:06.448 10:02:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:06.448 10:02:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:06.448 10:02:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:06.448 10:02:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:06.448 10:02:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:06.448 10:02:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:06.448 10:02:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:06.706 10:02:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:06.706 10:02:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:06.706 10:02:37 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:11:06.706 10:02:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:06.706 10:02:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:06.706 10:02:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:06.706 10:02:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:06.706 10:02:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:06.706 10:02:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:06.706 10:02:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:06.706 10:02:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:06.706 10:02:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:06.706 10:02:37 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:07.271 10:02:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:08.656 [2024-12-09 10:02:39.089996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:08.656 [2024-12-09 10:02:39.237908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.656 [2024-12-09 10:02:39.237929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.656 [2024-12-09 10:02:39.446755] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:08.656 [2024-12-09 10:02:39.446924] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:10.553 spdk_app_start Round 2 00:11:10.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:10.553 10:02:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:10.553 10:02:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:11:10.553 10:02:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59610 /var/tmp/spdk-nbd.sock 00:11:10.553 10:02:40 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59610 ']' 00:11:10.553 10:02:40 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:10.553 10:02:40 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:10.553 10:02:40 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
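The empty-list check that closed Round 1 above is plain JSON plumbing: nbd_get_disks returns a JSON array, jq extracts the .nbd_device fields, and grep -c counts them, with a trailing true so that a zero count does not trip errexit (grep -c still prints 0 but exits 1 when nothing matches, which is why the trace shows a bare "true"). The same check in isolation:

    rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    disks_json=$($rpc nbd_get_disks)
    names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$names" | grep -c /dev/nbd || true)   # "0" is captured; || true absorbs exit 1
    echo "exported nbd devices: $count"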
00:11:10.553 10:02:40 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:10.553 10:02:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:10.553 10:02:41 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:10.553 10:02:41 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:11:10.553 10:02:41 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:11.121 Malloc0 00:11:11.121 10:02:41 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:11.379 Malloc1 00:11:11.379 10:02:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:11.379 10:02:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:11.379 10:02:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:11.379 10:02:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:11.379 10:02:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:11.379 10:02:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:11.379 10:02:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:11.379 10:02:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:11.379 10:02:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:11.379 10:02:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:11.379 10:02:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:11.379 10:02:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:11.379 10:02:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:11.379 10:02:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:11.379 10:02:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:11.379 10:02:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:11.945 /dev/nbd0 00:11:11.945 10:02:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:11.945 10:02:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:11.945 10:02:42 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:11.945 10:02:42 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:11.945 10:02:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:11.945 10:02:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:11.945 10:02:42 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:11.945 10:02:42 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:11.945 10:02:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:11.945 10:02:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:11.945 10:02:42 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:11.945 1+0 records in 00:11:11.945 1+0 records out 
00:11:11.945 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000316587 s, 12.9 MB/s 00:11:11.945 10:02:42 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:11.945 10:02:42 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:11.945 10:02:42 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:11.945 10:02:42 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:11.945 10:02:42 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:11.945 10:02:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:11.945 10:02:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:11.945 10:02:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:12.203 /dev/nbd1 00:11:12.203 10:02:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:12.203 10:02:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:12.204 10:02:42 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:12.204 10:02:42 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:12.204 10:02:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:12.204 10:02:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:12.204 10:02:42 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:12.204 10:02:42 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:12.204 10:02:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:12.204 10:02:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:12.204 10:02:42 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:12.204 1+0 records in 00:11:12.204 1+0 records out 00:11:12.204 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000378661 s, 10.8 MB/s 00:11:12.204 10:02:42 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:12.204 10:02:42 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:12.204 10:02:42 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:12.204 10:02:42 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:12.204 10:02:42 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:12.204 10:02:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:12.204 10:02:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:12.204 10:02:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:12.204 10:02:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:12.204 10:02:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:12.769 10:02:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:12.769 { 00:11:12.769 "nbd_device": "/dev/nbd0", 00:11:12.769 "bdev_name": "Malloc0" 00:11:12.769 }, 00:11:12.769 { 00:11:12.769 "nbd_device": "/dev/nbd1", 00:11:12.769 "bdev_name": "Malloc1" 00:11:12.769 } 
00:11:12.769 ]' 00:11:12.769 10:02:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:12.769 { 00:11:12.769 "nbd_device": "/dev/nbd0", 00:11:12.769 "bdev_name": "Malloc0" 00:11:12.769 }, 00:11:12.769 { 00:11:12.769 "nbd_device": "/dev/nbd1", 00:11:12.769 "bdev_name": "Malloc1" 00:11:12.769 } 00:11:12.769 ]' 00:11:12.769 10:02:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:12.769 10:02:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:12.769 /dev/nbd1' 00:11:12.769 10:02:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:12.769 /dev/nbd1' 00:11:12.769 10:02:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:12.769 10:02:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:12.769 10:02:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:12.769 10:02:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:12.769 10:02:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:12.769 10:02:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:12.769 10:02:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:12.769 10:02:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:12.769 10:02:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:12.769 10:02:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:12.769 10:02:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:12.769 10:02:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:12.769 256+0 records in 00:11:12.769 256+0 records out 00:11:12.769 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00460831 s, 228 MB/s 00:11:12.769 10:02:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:12.769 10:02:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:12.769 256+0 records in 00:11:12.769 256+0 records out 00:11:12.769 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0406243 s, 25.8 MB/s 00:11:12.769 10:02:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:12.770 10:02:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:12.770 256+0 records in 00:11:12.770 256+0 records out 00:11:12.770 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0456694 s, 23.0 MB/s 00:11:12.770 10:02:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:12.770 10:02:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:12.770 10:02:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:12.770 10:02:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:12.770 10:02:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:12.770 10:02:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:12.770 10:02:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:12.770 10:02:43 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:12.770 10:02:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:12.770 10:02:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:12.770 10:02:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:12.770 10:02:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:12.770 10:02:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:12.770 10:02:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:12.770 10:02:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:12.770 10:02:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:12.770 10:02:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:12.770 10:02:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:12.770 10:02:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:13.027 10:02:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:13.285 10:02:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:13.285 10:02:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:13.285 10:02:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:13.285 10:02:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:13.285 10:02:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:13.285 10:02:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:13.285 10:02:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:13.285 10:02:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:13.285 10:02:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:13.543 10:02:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:13.543 10:02:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:13.543 10:02:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:13.543 10:02:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:13.543 10:02:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:13.543 10:02:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:13.543 10:02:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:13.543 10:02:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:13.543 10:02:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:13.543 10:02:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:13.543 10:02:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:13.805 10:02:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:13.806 10:02:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:13.806 10:02:44 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:11:13.806 10:02:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:13.806 10:02:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:13.806 10:02:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:13.806 10:02:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:13.806 10:02:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:13.806 10:02:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:13.806 10:02:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:13.806 10:02:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:13.806 10:02:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:13.806 10:02:44 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:14.376 10:02:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:15.310 [2024-12-09 10:02:46.065630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:15.568 [2024-12-09 10:02:46.195121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.568 [2024-12-09 10:02:46.195134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.826 [2024-12-09 10:02:46.389657] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:15.826 [2024-12-09 10:02:46.389789] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:17.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:17.201 10:02:47 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59610 /var/tmp/spdk-nbd.sock 00:11:17.201 10:02:47 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59610 ']' 00:11:17.201 10:02:47 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:17.201 10:02:47 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:17.201 10:02:47 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
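Teardown is symmetric with setup, and Round 2 just exercised it again: each device is detached with nbd_stop_disk, waitfornbd_exit polls /proc/partitions until the name disappears, and spdk_kill_instance SIGTERM over the same socket ends the iteration. A sketch of the stop path (retry cadence assumed):

    rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    for nbd in /dev/nbd0 /dev/nbd1; do
        $rpc nbd_stop_disk "$nbd"
        name=$(basename "$nbd")
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$name" /proc/partitions || break   # entry gone: device fully detached
            sleep 0.1                                      # assumed cadence
        done
    done
    $rpc spdk_kill_instance SIGTERM                        # ends this app_repeat round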
00:11:17.201 10:02:47 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:17.201 10:02:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:17.460 10:02:48 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:17.460 10:02:48 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:11:17.460 10:02:48 event.app_repeat -- event/event.sh@39 -- # killprocess 59610 00:11:17.460 10:02:48 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59610 ']' 00:11:17.460 10:02:48 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59610 00:11:17.460 10:02:48 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:11:17.460 10:02:48 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:17.460 10:02:48 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59610 00:11:17.718 killing process with pid 59610 00:11:17.718 10:02:48 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:17.718 10:02:48 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:17.718 10:02:48 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59610' 00:11:17.718 10:02:48 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59610 00:11:17.718 10:02:48 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59610 00:11:18.652 spdk_app_start is called in Round 0. 00:11:18.652 Shutdown signal received, stop current app iteration 00:11:18.652 Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 reinitialization... 00:11:18.652 spdk_app_start is called in Round 1. 00:11:18.652 Shutdown signal received, stop current app iteration 00:11:18.652 Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 reinitialization... 00:11:18.652 spdk_app_start is called in Round 2. 00:11:18.652 Shutdown signal received, stop current app iteration 00:11:18.652 Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 reinitialization... 00:11:18.652 spdk_app_start is called in Round 3. 00:11:18.652 Shutdown signal received, stop current app iteration 00:11:18.652 10:02:49 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:11:18.652 10:02:49 event.app_repeat -- event/event.sh@42 -- # return 0 00:11:18.652 00:11:18.652 real 0m22.557s 00:11:18.652 user 0m49.982s 00:11:18.652 sys 0m3.279s 00:11:18.652 ************************************ 00:11:18.652 END TEST app_repeat 00:11:18.652 ************************************ 00:11:18.652 10:02:49 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:18.652 10:02:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:18.652 10:02:49 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:11:18.652 10:02:49 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:11:18.652 10:02:49 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:18.652 10:02:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:18.652 10:02:49 event -- common/autotest_common.sh@10 -- # set +x 00:11:18.652 ************************************ 00:11:18.652 START TEST cpu_locks 00:11:18.652 ************************************ 00:11:18.652 10:02:49 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:11:18.910 * Looking for test storage... 
00:11:18.910 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:11:18.910 10:02:49 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:18.910 10:02:49 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:11:18.910 10:02:49 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:18.910 10:02:49 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:18.910 10:02:49 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:18.910 10:02:49 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:18.910 10:02:49 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:18.910 10:02:49 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:11:18.910 10:02:49 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:11:18.910 10:02:49 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:11:18.910 10:02:49 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:11:18.910 10:02:49 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:11:18.910 10:02:49 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:11:18.910 10:02:49 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:11:18.910 10:02:49 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:18.910 10:02:49 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:11:18.910 10:02:49 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:11:18.910 10:02:49 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:18.910 10:02:49 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:18.910 10:02:49 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:11:18.910 10:02:49 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:11:18.910 10:02:49 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:18.910 10:02:49 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:11:18.910 10:02:49 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:11:18.910 10:02:49 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:11:18.910 10:02:49 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:11:18.910 10:02:49 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:18.910 10:02:49 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:11:18.910 10:02:49 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:11:18.910 10:02:49 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:18.910 10:02:49 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:18.910 10:02:49 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:11:18.910 10:02:49 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:18.910 10:02:49 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:18.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.910 --rc genhtml_branch_coverage=1 00:11:18.910 --rc genhtml_function_coverage=1 00:11:18.910 --rc genhtml_legend=1 00:11:18.910 --rc geninfo_all_blocks=1 00:11:18.910 --rc geninfo_unexecuted_blocks=1 00:11:18.910 00:11:18.910 ' 00:11:18.910 10:02:49 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:18.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.910 --rc genhtml_branch_coverage=1 00:11:18.910 --rc genhtml_function_coverage=1 
00:11:18.911 --rc genhtml_legend=1 00:11:18.911 --rc geninfo_all_blocks=1 00:11:18.911 --rc geninfo_unexecuted_blocks=1 00:11:18.911 00:11:18.911 ' 00:11:18.911 10:02:49 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:18.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.911 --rc genhtml_branch_coverage=1 00:11:18.911 --rc genhtml_function_coverage=1 00:11:18.911 --rc genhtml_legend=1 00:11:18.911 --rc geninfo_all_blocks=1 00:11:18.911 --rc geninfo_unexecuted_blocks=1 00:11:18.911 00:11:18.911 ' 00:11:18.911 10:02:49 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:18.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.911 --rc genhtml_branch_coverage=1 00:11:18.911 --rc genhtml_function_coverage=1 00:11:18.911 --rc genhtml_legend=1 00:11:18.911 --rc geninfo_all_blocks=1 00:11:18.911 --rc geninfo_unexecuted_blocks=1 00:11:18.911 00:11:18.911 ' 00:11:18.911 10:02:49 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:11:18.911 10:02:49 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:11:18.911 10:02:49 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:11:18.911 10:02:49 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:11:18.911 10:02:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:18.911 10:02:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:18.911 10:02:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:18.911 ************************************ 00:11:18.911 START TEST default_locks 00:11:18.911 ************************************ 00:11:18.911 10:02:49 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:11:18.911 10:02:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60101 00:11:18.911 10:02:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60101 00:11:18.911 10:02:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:18.911 10:02:49 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60101 ']' 00:11:18.911 10:02:49 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.911 10:02:49 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:18.911 10:02:49 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:18.911 10:02:49 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:18.911 10:02:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:19.170 [2024-12-09 10:02:49.739232] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
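The lcov probe a few records back (lt 1.15 2) is a pure-bash version comparison: each version string is split on dots, dashes, and colons into an array, missing fields default to zero, and the fields are compared numerically left to right. A condensed sketch of that logic, not the exact scripts/common.sh code:

    # Return 0 (true) when version $1 sorts strictly before version $2.
    version_lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}   # pad short versions with 0
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1   # equal versions are not less-than
    }
    version_lt 1.15 2 && echo 'lcov predates 2.x'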
00:11:19.170 [2024-12-09 10:02:49.739692] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60101 ] 00:11:19.170 [2024-12-09 10:02:49.936065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.429 [2024-12-09 10:02:50.113474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.365 10:02:51 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:20.365 10:02:51 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:11:20.365 10:02:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60101 00:11:20.365 10:02:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60101 00:11:20.365 10:02:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:20.932 10:02:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60101 00:11:20.932 10:02:51 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 60101 ']' 00:11:20.932 10:02:51 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 60101 00:11:20.932 10:02:51 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:11:20.932 10:02:51 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:20.932 10:02:51 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60101 00:11:20.932 killing process with pid 60101 00:11:20.932 10:02:51 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:20.932 10:02:51 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:20.932 10:02:51 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60101' 00:11:20.932 10:02:51 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 60101 00:11:20.932 10:02:51 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 60101 00:11:23.464 10:02:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60101 00:11:23.464 10:02:53 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:11:23.464 10:02:53 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60101 00:11:23.464 10:02:53 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:11:23.464 10:02:53 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:23.464 10:02:53 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:11:23.464 10:02:53 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:23.464 10:02:53 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 60101 00:11:23.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
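locks_exist, traced above against pid 60101, is the suite's core assertion: ask util-linux lslocks for the file locks the pid holds and look for the spdk_cpu_lock prefix. A minimal sketch:

    # Does the given pid hold an SPDK per-core lock file under /var/tmp?
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }
    locks_exist 60101 && echo 'pid 60101 holds a core lock'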
00:11:23.464 10:02:53 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60101 ']' 00:11:23.464 10:02:53 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.464 10:02:53 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:23.464 10:02:53 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.464 10:02:53 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:23.464 10:02:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:23.464 ERROR: process (pid: 60101) is no longer running 00:11:23.464 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60101) - No such process 00:11:23.464 10:02:53 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:23.464 ************************************ 00:11:23.464 END TEST default_locks 00:11:23.464 ************************************ 00:11:23.464 10:02:53 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:11:23.464 10:02:53 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:11:23.464 10:02:53 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:23.464 10:02:53 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:23.464 10:02:53 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:23.464 10:02:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:11:23.464 10:02:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:23.464 10:02:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:11:23.464 10:02:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:23.464 00:11:23.464 real 0m4.399s 00:11:23.464 user 0m4.359s 00:11:23.464 sys 0m0.864s 00:11:23.464 10:02:53 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:23.464 10:02:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:23.464 10:02:54 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:11:23.464 10:02:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:23.464 10:02:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:23.464 10:02:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:23.464 ************************************ 00:11:23.464 START TEST default_locks_via_rpc 00:11:23.464 ************************************ 00:11:23.464 10:02:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:11:23.464 10:02:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60177 00:11:23.464 10:02:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:23.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
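The NOT wrapper that closed out default_locks above inverts an expected failure: waitforlisten on the killed pid returns 1, es=1 is recorded, the (( es > 128 )) guard rules out death by signal, and (( !es == 0 )) turns the nonzero status into a pass. The real helper also consults an allow-list of statuses (the [[ -n '' ]] step); this sketch skips that and keeps only the behavior visible in the trace:

    # Succeed only when the wrapped command fails without dying on a signal.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return 1   # killed by a signal: a real failure
        (( es != 0 ))                # pass iff the command failed cleanly
    }
    NOT false && echo 'false failed, as expected'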
00:11:23.464 10:02:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60177 00:11:23.464 10:02:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60177 ']' 00:11:23.464 10:02:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.464 10:02:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:23.464 10:02:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.464 10:02:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:23.464 10:02:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:23.464 [2024-12-09 10:02:54.172666] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:11:23.464 [2024-12-09 10:02:54.172855] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60177 ] 00:11:23.723 [2024-12-09 10:02:54.358213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.723 [2024-12-09 10:02:54.518026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.097 10:02:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:25.097 10:02:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:25.097 10:02:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:11:25.097 10:02:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.097 10:02:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:25.097 10:02:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.097 10:02:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:11:25.097 10:02:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:25.097 10:02:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:11:25.097 10:02:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:25.097 10:02:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:11:25.097 10:02:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.097 10:02:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:25.097 10:02:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.097 10:02:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60177 00:11:25.097 10:02:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60177 00:11:25.097 10:02:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:25.356 10:02:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60177 00:11:25.356 10:02:56 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 60177 ']' 00:11:25.356 10:02:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 60177 00:11:25.356 10:02:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:11:25.356 10:02:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:25.356 10:02:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60177 00:11:25.356 killing process with pid 60177 00:11:25.356 10:02:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:25.356 10:02:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:25.356 10:02:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60177' 00:11:25.356 10:02:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 60177 00:11:25.356 10:02:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 60177 00:11:27.887 ************************************ 00:11:27.887 END TEST default_locks_via_rpc 00:11:27.887 ************************************ 00:11:27.887 00:11:27.887 real 0m4.393s 00:11:27.887 user 0m4.480s 00:11:27.887 sys 0m0.774s 00:11:27.887 10:02:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:27.887 10:02:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:27.887 10:02:58 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:11:27.887 10:02:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:27.887 10:02:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:27.887 10:02:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:27.887 ************************************ 00:11:27.887 START TEST non_locking_app_on_locked_coremask 00:11:27.887 ************************************ 00:11:27.887 10:02:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:11:27.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.887 10:02:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60252 00:11:27.887 10:02:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60252 /var/tmp/spdk.sock 00:11:27.887 10:02:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:27.887 10:02:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60252 ']' 00:11:27.887 10:02:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.887 10:02:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:27.887 10:02:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
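default_locks_via_rpc, which finished above, drives the same lock machinery over JSON-RPC instead of process flags: framework_disable_cpumask_locks releases the per-core files (no_locks then sees an empty glob) and framework_enable_cpumask_locks re-claims them. Assuming a target already listening on the default socket, the raw calls would look like:

    # Toggle a running target's per-core lock files over JSON-RPC.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" -s /var/tmp/spdk.sock framework_disable_cpumask_locks
    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null || echo 'no lock files held'
    "$rpc" -s /var/tmp/spdk.sock framework_enable_cpumask_locks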
00:11:27.887 10:02:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:27.887 10:02:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:27.887 [2024-12-09 10:02:58.617422] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:11:27.887 [2024-12-09 10:02:58.617592] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60252 ] 00:11:28.145 [2024-12-09 10:02:58.790853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.145 [2024-12-09 10:02:58.922966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:29.080 10:02:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:29.080 10:02:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:29.080 10:02:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60274 00:11:29.080 10:02:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:11:29.080 10:02:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60274 /var/tmp/spdk2.sock 00:11:29.080 10:02:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60274 ']' 00:11:29.080 10:02:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:29.080 10:02:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:29.080 10:02:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:29.080 10:02:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:29.080 10:02:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:29.340 [2024-12-09 10:02:59.946869] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:11:29.340 [2024-12-09 10:02:59.948292] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60274 ] 00:11:29.605 [2024-12-09 10:03:00.162444] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:29.605 [2024-12-09 10:03:00.162513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.864 [2024-12-09 10:03:00.431735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.397 10:03:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:32.397 10:03:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:32.397 10:03:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60252 00:11:32.397 10:03:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60252 00:11:32.397 10:03:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:33.332 10:03:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60252 00:11:33.332 10:03:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60252 ']' 00:11:33.332 10:03:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60252 00:11:33.332 10:03:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:33.332 10:03:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:33.332 10:03:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60252 00:11:33.332 killing process with pid 60252 00:11:33.332 10:03:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:33.332 10:03:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:33.332 10:03:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60252' 00:11:33.332 10:03:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60252 00:11:33.332 10:03:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60252 00:11:38.624 10:03:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60274 00:11:38.624 10:03:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60274 ']' 00:11:38.624 10:03:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60274 00:11:38.624 10:03:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:38.624 10:03:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:38.624 10:03:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60274 00:11:38.624 killing process with pid 60274 00:11:38.624 10:03:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:38.624 10:03:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:38.624 10:03:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60274' 00:11:38.624 10:03:08 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60274 00:11:38.624 10:03:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60274 00:11:41.183 00:11:41.183 real 0m12.976s 00:11:41.183 user 0m13.533s 00:11:41.183 sys 0m1.712s 00:11:41.183 10:03:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:41.183 ************************************ 00:11:41.183 END TEST non_locking_app_on_locked_coremask 00:11:41.183 10:03:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:41.183 ************************************ 00:11:41.183 10:03:11 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:11:41.183 10:03:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:41.183 10:03:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:41.183 10:03:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:41.183 ************************************ 00:11:41.183 START TEST locking_app_on_unlocked_coremask 00:11:41.183 ************************************ 00:11:41.183 10:03:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:11:41.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.183 10:03:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60433 00:11:41.183 10:03:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60433 /var/tmp/spdk.sock 00:11:41.183 10:03:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:11:41.183 10:03:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60433 ']' 00:11:41.183 10:03:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.183 10:03:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:41.183 10:03:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.183 10:03:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:41.183 10:03:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:41.183 [2024-12-09 10:03:11.682800] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:11:41.183 [2024-12-09 10:03:11.683282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60433 ] 00:11:41.183 [2024-12-09 10:03:11.875945] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
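locking_app_on_unlocked_coremask, starting above, inverts the previous test's arrangement: the first target takes core 0 with --disable-cpumask-locks, leaving the lock file unclaimed, so a second target on its own RPC socket can lock that same core. A sketch of the two launches under those assumptions:

    # Two targets sharing core 0; only the second claims the lock file.
    bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$bin" -m 0x1 --disable-cpumask-locks & pid1=$!   # unlocked instance
    "$bin" -m 0x1 -r /var/tmp/spdk2.sock  & pid2=$!   # claims spdk_cpu_lock_000
    sleep 2 && kill "$pid1" "$pid2"                   # sketch-only teardown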
00:11:41.183 [2024-12-09 10:03:11.876285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.442 [2024-12-09 10:03:12.040973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.377 10:03:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:42.377 10:03:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:42.377 10:03:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60460 00:11:42.377 10:03:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:42.377 10:03:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60460 /var/tmp/spdk2.sock 00:11:42.377 10:03:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60460 ']' 00:11:42.377 10:03:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:42.377 10:03:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:42.377 10:03:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:42.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:42.377 10:03:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:42.377 10:03:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:42.635 [2024-12-09 10:03:13.246443] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
00:11:42.635 [2024-12-09 10:03:13.246642] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60460 ] 00:11:42.894 [2024-12-09 10:03:13.459461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.152 [2024-12-09 10:03:13.761797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.681 10:03:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:45.681 10:03:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:45.681 10:03:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60460 00:11:45.681 10:03:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60460 00:11:45.681 10:03:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:46.279 10:03:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60433 00:11:46.279 10:03:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60433 ']' 00:11:46.279 10:03:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60433 00:11:46.279 10:03:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:46.279 10:03:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:46.279 10:03:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60433 00:11:46.279 10:03:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:46.279 10:03:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:46.279 killing process with pid 60433 00:11:46.279 10:03:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60433' 00:11:46.279 10:03:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60433 00:11:46.279 10:03:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60433 00:11:51.593 10:03:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60460 00:11:51.593 10:03:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60460 ']' 00:11:51.593 10:03:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60460 00:11:51.593 10:03:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:51.593 10:03:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:51.593 10:03:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60460 00:11:51.593 killing process with pid 60460 00:11:51.593 10:03:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:51.593 10:03:22 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:51.593 10:03:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60460' 00:11:51.593 10:03:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60460 00:11:51.593 10:03:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60460 00:11:54.152 00:11:54.152 real 0m13.041s 00:11:54.152 user 0m13.553s 00:11:54.152 sys 0m1.759s 00:11:54.152 10:03:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:54.152 10:03:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:54.152 ************************************ 00:11:54.152 END TEST locking_app_on_unlocked_coremask 00:11:54.152 ************************************ 00:11:54.152 10:03:24 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:11:54.152 10:03:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:54.152 10:03:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:54.152 10:03:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:54.152 ************************************ 00:11:54.152 START TEST locking_app_on_locked_coremask 00:11:54.152 ************************************ 00:11:54.152 10:03:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:11:54.152 10:03:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60619 00:11:54.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.152 10:03:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60619 /var/tmp/spdk.sock 00:11:54.152 10:03:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60619 ']' 00:11:54.152 10:03:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:54.152 10:03:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.152 10:03:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:54.152 10:03:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.152 10:03:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:54.152 10:03:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:54.152 [2024-12-09 10:03:24.773784] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
00:11:54.152 [2024-12-09 10:03:24.774294] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60619 ] 00:11:54.410 [2024-12-09 10:03:24.972008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.410 [2024-12-09 10:03:25.140248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.346 10:03:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:55.346 10:03:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:55.346 10:03:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60635 00:11:55.346 10:03:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60635 /var/tmp/spdk2.sock 00:11:55.346 10:03:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:11:55.346 10:03:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:55.346 10:03:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60635 /var/tmp/spdk2.sock 00:11:55.346 10:03:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:11:55.346 10:03:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:55.346 10:03:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:11:55.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:55.346 10:03:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:55.346 10:03:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60635 /var/tmp/spdk2.sock 00:11:55.346 10:03:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60635 ']' 00:11:55.346 10:03:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:55.346 10:03:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:55.346 10:03:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:55.346 10:03:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:55.346 10:03:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:55.605 [2024-12-09 10:03:26.246948] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
00:11:55.605 [2024-12-09 10:03:26.247188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60635 ] 00:11:55.864 [2024-12-09 10:03:26.455015] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60619 has claimed it. 00:11:55.864 [2024-12-09 10:03:26.455125] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:56.429 ERROR: process (pid: 60635) is no longer running 00:11:56.429 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60635) - No such process 00:11:56.430 10:03:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:56.430 10:03:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:11:56.430 10:03:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:11:56.430 10:03:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:56.430 10:03:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:56.430 10:03:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:56.430 10:03:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60619 00:11:56.430 10:03:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60619 00:11:56.430 10:03:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:57.036 10:03:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60619 00:11:57.036 10:03:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60619 ']' 00:11:57.036 10:03:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60619 00:11:57.036 10:03:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:57.036 10:03:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:57.036 10:03:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60619 00:11:57.036 killing process with pid 60619 00:11:57.036 10:03:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:57.036 10:03:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:57.036 10:03:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60619' 00:11:57.036 10:03:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60619 00:11:57.036 10:03:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60619 00:11:59.566 00:11:59.566 real 0m5.649s 00:11:59.566 user 0m5.970s 00:11:59.566 sys 0m1.017s 00:11:59.566 10:03:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:59.566 10:03:30 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:11:59.566 ************************************ 00:11:59.566 END TEST locking_app_on_locked_coremask 00:11:59.566 ************************************ 00:11:59.566 10:03:30 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:11:59.566 10:03:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:59.566 10:03:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:59.566 10:03:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:59.566 ************************************ 00:11:59.566 START TEST locking_overlapped_coremask 00:11:59.566 ************************************ 00:11:59.566 10:03:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:11:59.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.566 10:03:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60710 00:11:59.566 10:03:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60710 /var/tmp/spdk.sock 00:11:59.566 10:03:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:11:59.566 10:03:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60710 ']' 00:11:59.566 10:03:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.566 10:03:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:59.566 10:03:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.566 10:03:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:59.566 10:03:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:59.825 [2024-12-09 10:03:30.479097] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
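locking_app_on_locked_coremask, which wrapped up above, exercised the failure path: with pid 60619 holding the core-0 lock, the second instance logged "Cannot create lock on core 0" and exited, NOT turned that expected error into a pass, and the line-850 kill confirmed the process was gone. The claim itself behaves like an exclusive, non-blocking advisory lock on /var/tmp/spdk_cpu_lock_NNN; a sketch with flock(1) standing in for the target's own locking code:

    # Approximate the per-core claim: exclusive non-blocking lock on the file.
    claim_core() {
        local core=$1 lockfile fd
        printf -v lockfile '/var/tmp/spdk_cpu_lock_%03d' "$core"
        exec {fd}> "$lockfile"            # fd stays open to hold the claim
        if ! flock -xn "$fd"; then
            echo "Cannot create lock on core $core" >&2
            return 1
        fi
    }
    claim_core 0 || echo 'core 0 already claimed'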
00:11:59.825 [2024-12-09 10:03:30.479319] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60710 ] 00:12:00.084 [2024-12-09 10:03:30.677619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:00.084 [2024-12-09 10:03:30.848348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:00.084 [2024-12-09 10:03:30.848563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.084 [2024-12-09 10:03:30.848602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:01.021 10:03:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:01.021 10:03:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:12:01.021 10:03:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60738 00:12:01.021 10:03:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60738 /var/tmp/spdk2.sock 00:12:01.021 10:03:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:12:01.021 10:03:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:12:01.021 10:03:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60738 /var/tmp/spdk2.sock 00:12:01.021 10:03:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:12:01.021 10:03:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:01.021 10:03:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:12:01.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:01.021 10:03:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:01.021 10:03:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60738 /var/tmp/spdk2.sock 00:12:01.021 10:03:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60738 ']' 00:12:01.021 10:03:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:01.021 10:03:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:01.021 10:03:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:01.021 10:03:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:01.021 10:03:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:01.302 [2024-12-09 10:03:31.938845] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
00:12:01.302 [2024-12-09 10:03:31.939355] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60738 ] 00:12:01.560 [2024-12-09 10:03:32.149780] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60710 has claimed it. 00:12:01.561 [2024-12-09 10:03:32.149901] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:12:01.819 ERROR: process (pid: 60738) is no longer running 00:12:01.819 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60738) - No such process 00:12:01.819 10:03:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:01.819 10:03:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:12:01.819 10:03:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:12:01.819 10:03:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:01.819 10:03:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:01.819 10:03:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:01.819 10:03:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:12:01.819 10:03:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:12:01.819 10:03:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:12:01.819 10:03:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:12:01.819 10:03:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60710 00:12:01.820 10:03:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60710 ']' 00:12:01.820 10:03:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60710 00:12:01.820 10:03:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:12:01.820 10:03:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:01.820 10:03:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60710 00:12:01.820 10:03:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:01.820 10:03:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:01.820 10:03:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60710' 00:12:01.820 killing process with pid 60710 00:12:01.820 10:03:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60710 00:12:01.820 10:03:32 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60710 00:12:04.354 00:12:04.354 real 0m4.720s 00:12:04.354 user 0m12.530s 00:12:04.354 sys 0m0.829s 00:12:04.354 10:03:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:04.354 10:03:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:04.354 ************************************ 00:12:04.354 END TEST locking_overlapped_coremask 00:12:04.354 ************************************ 00:12:04.354 10:03:35 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:12:04.354 10:03:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:04.354 10:03:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:04.354 10:03:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:04.354 ************************************ 00:12:04.354 START TEST locking_overlapped_coremask_via_rpc 00:12:04.354 ************************************ 00:12:04.354 10:03:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:12:04.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.354 10:03:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60803 00:12:04.354 10:03:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60803 /var/tmp/spdk.sock 00:12:04.354 10:03:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60803 ']' 00:12:04.354 10:03:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:12:04.354 10:03:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.354 10:03:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:04.354 10:03:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.354 10:03:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:04.354 10:03:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.612 [2024-12-09 10:03:35.254256] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:12:04.612 [2024-12-09 10:03:35.254467] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60803 ] 00:12:04.869 [2024-12-09 10:03:35.457085] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
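check_remaining_locks, traced in the test above, pins down exactly which cores a 0x7-masked target locked: it globs /var/tmp/spdk_cpu_lock_* and requires the result to equal the brace expansion for cores 000 through 002. A condensed sketch:

    # Assert that cores 0..2, and only those, hold lock files.
    check_remaining_locks() {
        local locks=(/var/tmp/spdk_cpu_lock_*)
        local expected=(/var/tmp/spdk_cpu_lock_{000..002})
        [[ ${locks[*]} == "${expected[*]}" ]]   # same names, same order
    }
    check_remaining_locks && echo 'lock files match cores 0-2'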
00:12:04.869 [2024-12-09 10:03:35.457176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:04.869 [2024-12-09 10:03:35.603141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:04.869 [2024-12-09 10:03:35.603322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.869 [2024-12-09 10:03:35.603329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:05.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:05.805 10:03:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:05.805 10:03:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:05.805 10:03:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60821 00:12:05.805 10:03:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60821 /var/tmp/spdk2.sock 00:12:05.805 10:03:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:12:05.805 10:03:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60821 ']' 00:12:05.805 10:03:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:05.805 10:03:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:05.805 10:03:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:05.805 10:03:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:05.805 10:03:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.064 [2024-12-09 10:03:36.720105] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:12:06.064 [2024-12-09 10:03:36.720641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60821 ] 00:12:06.322 [2024-12-09 10:03:36.929789] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:12:06.322 [2024-12-09 10:03:36.929924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:06.594 [2024-12-09 10:03:37.250868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:06.594 [2024-12-09 10:03:37.253981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:06.594 [2024-12-09 10:03:37.254002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:09.150 10:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:09.150 10:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:09.150 10:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:12:09.150 10:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.150 10:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.150 10:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.150 10:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:09.150 10:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:09.150 10:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:09.150 10:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:09.150 10:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:09.150 10:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:09.150 10:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:09.150 10:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:09.150 10:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.150 10:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.150 [2024-12-09 10:03:39.525164] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60803 has claimed it. 
00:12:09.150 request: 00:12:09.150 { 00:12:09.150 "method": "framework_enable_cpumask_locks", 00:12:09.150 "req_id": 1 00:12:09.150 } 00:12:09.150 Got JSON-RPC error response 00:12:09.150 response: 00:12:09.150 { 00:12:09.150 "code": -32603, 00:12:09.150 "message": "Failed to claim CPU core: 2" 00:12:09.150 } 00:12:09.150 10:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:09.150 10:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:09.150 10:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:09.150 10:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:09.150 10:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:09.150 10:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60803 /var/tmp/spdk.sock 00:12:09.150 10:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60803 ']' 00:12:09.150 10:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.150 10:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:09.150 10:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.150 10:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:09.150 10:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.150 10:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:09.150 10:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:09.150 10:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60821 /var/tmp/spdk2.sock 00:12:09.150 10:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60821 ']' 00:12:09.150 10:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:09.150 10:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:09.150 10:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:09.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
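The exchange above is the expected-failure path of the test: the first target (pid 60803, started with -m 0x7 and --disable-cpumask-locks) claimed its cores via framework_enable_cpumask_locks, so the second target (pid 60821, -m 0x1c) overlaps it on core 2 and its own call has to come back with -32603 "Failed to claim CPU core: 2". A minimal sketch of the same sequence run by hand, assuming a built tree at $SPDK_DIR (hypothetical variable; the binaries, flags, and RPC script match the paths in this log, startup waits omitted):

    # two targets with overlapping core masks, lock claiming deferred at boot
    $SPDK_DIR/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
    $SPDK_DIR/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
    # first target claims cores 0-2 (lock files /var/tmp/spdk_cpu_lock_000..002)
    $SPDK_DIR/scripts/rpc.py framework_enable_cpumask_locks
    # second target would need core 2 as well, so this call should fail with -32603
    $SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks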
00:12:09.150 10:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:09.150 10:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.717 ************************************ 00:12:09.717 END TEST locking_overlapped_coremask_via_rpc 00:12:09.717 ************************************ 00:12:09.717 10:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:09.717 10:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:09.718 10:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:12:09.718 10:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:12:09.718 10:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:12:09.718 10:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:12:09.718 00:12:09.718 real 0m5.104s 00:12:09.718 user 0m1.969s 00:12:09.718 sys 0m0.266s 00:12:09.718 10:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:09.718 10:03:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.718 10:03:40 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:12:09.718 10:03:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60803 ]] 00:12:09.718 10:03:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60803 00:12:09.718 10:03:40 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60803 ']' 00:12:09.718 10:03:40 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60803 00:12:09.718 10:03:40 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:12:09.718 10:03:40 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:09.718 10:03:40 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60803 00:12:09.718 killing process with pid 60803 00:12:09.718 10:03:40 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:09.718 10:03:40 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:09.718 10:03:40 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60803' 00:12:09.718 10:03:40 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60803 00:12:09.718 10:03:40 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60803 00:12:12.253 10:03:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60821 ]] 00:12:12.253 10:03:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60821 00:12:12.253 10:03:42 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60821 ']' 00:12:12.253 10:03:42 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60821 00:12:12.253 10:03:42 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:12:12.253 10:03:42 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:12.253 
10:03:42 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60821 00:12:12.253 killing process with pid 60821 00:12:12.253 10:03:42 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:12.253 10:03:42 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:12.253 10:03:42 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60821' 00:12:12.253 10:03:42 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60821 00:12:12.253 10:03:42 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60821 00:12:14.785 10:03:45 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:12:14.785 10:03:45 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:12:14.785 10:03:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60803 ]] 00:12:14.785 10:03:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60803 00:12:14.785 10:03:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60803 ']' 00:12:14.785 10:03:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60803 00:12:14.785 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60803) - No such process 00:12:14.785 Process with pid 60803 is not found 00:12:14.785 Process with pid 60821 is not found 00:12:14.785 10:03:45 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60803 is not found' 00:12:14.785 10:03:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60821 ]] 00:12:14.785 10:03:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60821 00:12:14.785 10:03:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60821 ']' 00:12:14.785 10:03:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60821 00:12:14.785 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60821) - No such process 00:12:14.785 10:03:45 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60821 is not found' 00:12:14.785 10:03:45 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:12:14.785 ************************************ 00:12:14.785 END TEST cpu_locks 00:12:14.785 ************************************ 00:12:14.785 00:12:14.785 real 0m55.872s 00:12:14.785 user 1m34.656s 00:12:14.785 sys 0m8.663s 00:12:14.785 10:03:45 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:14.785 10:03:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:14.785 ************************************ 00:12:14.785 END TEST event 00:12:14.785 ************************************ 00:12:14.785 00:12:14.785 real 1m30.353s 00:12:14.785 user 2m43.337s 00:12:14.785 sys 0m13.209s 00:12:14.785 10:03:45 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:14.785 10:03:45 event -- common/autotest_common.sh@10 -- # set +x 00:12:14.785 10:03:45 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:12:14.785 10:03:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:14.785 10:03:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:14.785 10:03:45 -- common/autotest_common.sh@10 -- # set +x 00:12:14.785 ************************************ 00:12:14.785 START TEST thread 00:12:14.785 ************************************ 00:12:14.785 10:03:45 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:12:14.785 * Looking for test storage... 
00:12:14.785 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:12:14.785 10:03:45 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:14.785 10:03:45 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:14.785 10:03:45 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:12:14.785 10:03:45 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:14.785 10:03:45 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:14.785 10:03:45 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:14.785 10:03:45 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:14.785 10:03:45 thread -- scripts/common.sh@336 -- # IFS=.-: 00:12:14.785 10:03:45 thread -- scripts/common.sh@336 -- # read -ra ver1 00:12:14.785 10:03:45 thread -- scripts/common.sh@337 -- # IFS=.-: 00:12:14.785 10:03:45 thread -- scripts/common.sh@337 -- # read -ra ver2 00:12:14.785 10:03:45 thread -- scripts/common.sh@338 -- # local 'op=<' 00:12:14.785 10:03:45 thread -- scripts/common.sh@340 -- # ver1_l=2 00:12:14.785 10:03:45 thread -- scripts/common.sh@341 -- # ver2_l=1 00:12:14.785 10:03:45 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:14.785 10:03:45 thread -- scripts/common.sh@344 -- # case "$op" in 00:12:14.785 10:03:45 thread -- scripts/common.sh@345 -- # : 1 00:12:14.785 10:03:45 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:14.785 10:03:45 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:14.785 10:03:45 thread -- scripts/common.sh@365 -- # decimal 1 00:12:14.785 10:03:45 thread -- scripts/common.sh@353 -- # local d=1 00:12:14.785 10:03:45 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:14.785 10:03:45 thread -- scripts/common.sh@355 -- # echo 1 00:12:14.785 10:03:45 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:12:14.785 10:03:45 thread -- scripts/common.sh@366 -- # decimal 2 00:12:14.785 10:03:45 thread -- scripts/common.sh@353 -- # local d=2 00:12:14.786 10:03:45 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:14.786 10:03:45 thread -- scripts/common.sh@355 -- # echo 2 00:12:14.786 10:03:45 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:12:14.786 10:03:45 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:14.786 10:03:45 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:14.786 10:03:45 thread -- scripts/common.sh@368 -- # return 0 00:12:14.786 10:03:45 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:14.786 10:03:45 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:14.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.786 --rc genhtml_branch_coverage=1 00:12:14.786 --rc genhtml_function_coverage=1 00:12:14.786 --rc genhtml_legend=1 00:12:14.786 --rc geninfo_all_blocks=1 00:12:14.786 --rc geninfo_unexecuted_blocks=1 00:12:14.786 00:12:14.786 ' 00:12:14.786 10:03:45 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:14.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.786 --rc genhtml_branch_coverage=1 00:12:14.786 --rc genhtml_function_coverage=1 00:12:14.786 --rc genhtml_legend=1 00:12:14.786 --rc geninfo_all_blocks=1 00:12:14.786 --rc geninfo_unexecuted_blocks=1 00:12:14.786 00:12:14.786 ' 00:12:14.786 10:03:45 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:14.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:12:14.786 --rc genhtml_branch_coverage=1 00:12:14.786 --rc genhtml_function_coverage=1 00:12:14.786 --rc genhtml_legend=1 00:12:14.786 --rc geninfo_all_blocks=1 00:12:14.786 --rc geninfo_unexecuted_blocks=1 00:12:14.786 00:12:14.786 ' 00:12:14.786 10:03:45 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:14.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.786 --rc genhtml_branch_coverage=1 00:12:14.786 --rc genhtml_function_coverage=1 00:12:14.786 --rc genhtml_legend=1 00:12:14.786 --rc geninfo_all_blocks=1 00:12:14.786 --rc geninfo_unexecuted_blocks=1 00:12:14.786 00:12:14.786 ' 00:12:14.786 10:03:45 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:12:14.786 10:03:45 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:12:14.786 10:03:45 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:14.786 10:03:45 thread -- common/autotest_common.sh@10 -- # set +x 00:12:14.786 ************************************ 00:12:14.786 START TEST thread_poller_perf 00:12:14.786 ************************************ 00:12:14.786 10:03:45 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:12:15.044 [2024-12-09 10:03:45.615417] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:12:15.044 [2024-12-09 10:03:45.615634] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61026 ] 00:12:15.044 [2024-12-09 10:03:45.815343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.302 [2024-12-09 10:03:45.997158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.302 Running 1000 pollers for 1 seconds with 1 microseconds period. 
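The "Running 1000 pollers..." banner reflects the tool's arguments one-to-one: -b is the number of pollers, -t the run time in seconds, and -l the poller period in microseconds (-l 0, used in the second run below, registers busy pollers with no period; a reading consistent with both banners in this log). A direct invocation would look like this sketch, again assuming $SPDK_DIR (hypothetical variable) points at the built tree; its results table follows:

    # 1000 pollers, 1 us period, 1 second measurement window
    $SPDK_DIR/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1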
00:12:16.707 [2024-12-09T10:03:47.504Z] ====================================== 00:12:16.707 [2024-12-09T10:03:47.504Z] busy:2210566053 (cyc) 00:12:16.707 [2024-12-09T10:03:47.504Z] total_run_count: 285000 00:12:16.707 [2024-12-09T10:03:47.504Z] tsc_hz: 2200000000 (cyc) 00:12:16.707 [2024-12-09T10:03:47.504Z] ====================================== 00:12:16.707 [2024-12-09T10:03:47.504Z] poller_cost: 7756 (cyc), 3525 (nsec) 00:12:16.707 00:12:16.707 real 0m1.784s 00:12:16.707 user 0m1.557s 00:12:16.707 sys 0m0.115s 00:12:16.707 10:03:47 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:16.707 ************************************ 00:12:16.707 END TEST thread_poller_perf 00:12:16.707 ************************************ 00:12:16.707 10:03:47 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:12:16.707 10:03:47 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:12:16.707 10:03:47 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:12:16.707 10:03:47 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:16.707 10:03:47 thread -- common/autotest_common.sh@10 -- # set +x 00:12:16.707 ************************************ 00:12:16.707 START TEST thread_poller_perf 00:12:16.707 ************************************ 00:12:16.707 10:03:47 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:12:16.707 [2024-12-09 10:03:47.452805] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:12:16.707 [2024-12-09 10:03:47.453007] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61064 ] 00:12:16.966 [2024-12-09 10:03:47.649337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.224 Running 1000 pollers for 1 seconds with 0 microseconds period. 
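The poller_cost line in each results table is the busy TSC cycle count divided by the run count, converted to nanoseconds via the reported tsc_hz. Re-deriving the first table's figures in shell arithmetic (a standalone check using only numbers from the log, not part of the test suite; the busy-poll run's own table follows just below):

    busy=2210566053; runs=285000; tsc_hz=2200000000
    cyc=$(( busy / runs ))                      # 7756 cycles per poller call
    nsec=$(( cyc * 1000000000 / tsc_hz ))       # 3525 ns at 2.2 GHz
    echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"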
00:12:17.224 [2024-12-09 10:03:47.804379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.600 [2024-12-09T10:03:49.398Z] ====================================== 00:12:18.601 [2024-12-09T10:03:49.398Z] busy:2203930074 (cyc) 00:12:18.601 [2024-12-09T10:03:49.398Z] total_run_count: 3496000 00:12:18.601 [2024-12-09T10:03:49.398Z] tsc_hz: 2200000000 (cyc) 00:12:18.601 [2024-12-09T10:03:49.398Z] ====================================== 00:12:18.601 [2024-12-09T10:03:49.398Z] poller_cost: 630 (cyc), 286 (nsec) 00:12:18.601 00:12:18.601 real 0m1.755s 00:12:18.601 user 0m1.522s 00:12:18.601 sys 0m0.121s 00:12:18.601 10:03:49 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:18.601 ************************************ 00:12:18.601 END TEST thread_poller_perf 00:12:18.601 ************************************ 00:12:18.601 10:03:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:12:18.601 10:03:49 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:12:18.601 00:12:18.601 real 0m3.832s 00:12:18.601 user 0m3.234s 00:12:18.601 sys 0m0.378s 00:12:18.601 10:03:49 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:18.601 ************************************ 00:12:18.601 END TEST thread 00:12:18.601 ************************************ 00:12:18.601 10:03:49 thread -- common/autotest_common.sh@10 -- # set +x 00:12:18.601 10:03:49 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:12:18.601 10:03:49 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:12:18.601 10:03:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:18.601 10:03:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:18.601 10:03:49 -- common/autotest_common.sh@10 -- # set +x 00:12:18.601 ************************************ 00:12:18.601 START TEST app_cmdline 00:12:18.601 ************************************ 00:12:18.601 10:03:49 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:12:18.601 * Looking for test storage... 
00:12:18.601 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:18.601 10:03:49 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:18.601 10:03:49 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:18.601 10:03:49 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:12:18.859 10:03:49 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:18.859 10:03:49 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:18.859 10:03:49 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:18.859 10:03:49 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:18.859 10:03:49 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:12:18.860 10:03:49 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:12:18.860 10:03:49 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:12:18.860 10:03:49 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:12:18.860 10:03:49 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:12:18.860 10:03:49 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:12:18.860 10:03:49 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:12:18.860 10:03:49 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:18.860 10:03:49 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:12:18.860 10:03:49 app_cmdline -- scripts/common.sh@345 -- # : 1 00:12:18.860 10:03:49 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:18.860 10:03:49 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:18.860 10:03:49 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:12:18.860 10:03:49 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:12:18.860 10:03:49 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:18.860 10:03:49 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:12:18.860 10:03:49 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:12:18.860 10:03:49 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:12:18.860 10:03:49 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:12:18.860 10:03:49 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:18.860 10:03:49 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:12:18.860 10:03:49 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:12:18.860 10:03:49 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:18.860 10:03:49 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:18.860 10:03:49 app_cmdline -- scripts/common.sh@368 -- # return 0 00:12:18.860 10:03:49 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:18.860 10:03:49 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:18.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.860 --rc genhtml_branch_coverage=1 00:12:18.860 --rc genhtml_function_coverage=1 00:12:18.860 --rc genhtml_legend=1 00:12:18.860 --rc geninfo_all_blocks=1 00:12:18.860 --rc geninfo_unexecuted_blocks=1 00:12:18.860 00:12:18.860 ' 00:12:18.860 10:03:49 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:18.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.860 --rc genhtml_branch_coverage=1 00:12:18.860 --rc genhtml_function_coverage=1 00:12:18.860 --rc genhtml_legend=1 00:12:18.860 --rc geninfo_all_blocks=1 00:12:18.860 --rc geninfo_unexecuted_blocks=1 00:12:18.860 
00:12:18.860 ' 00:12:18.860 10:03:49 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:18.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.860 --rc genhtml_branch_coverage=1 00:12:18.860 --rc genhtml_function_coverage=1 00:12:18.860 --rc genhtml_legend=1 00:12:18.860 --rc geninfo_all_blocks=1 00:12:18.860 --rc geninfo_unexecuted_blocks=1 00:12:18.860 00:12:18.860 ' 00:12:18.860 10:03:49 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:18.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.860 --rc genhtml_branch_coverage=1 00:12:18.860 --rc genhtml_function_coverage=1 00:12:18.860 --rc genhtml_legend=1 00:12:18.860 --rc geninfo_all_blocks=1 00:12:18.860 --rc geninfo_unexecuted_blocks=1 00:12:18.860 00:12:18.860 ' 00:12:18.860 10:03:49 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:12:18.860 10:03:49 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61153 00:12:18.860 10:03:49 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61153 00:12:18.860 10:03:49 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:12:18.860 10:03:49 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 61153 ']' 00:12:18.860 10:03:49 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.860 10:03:49 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:18.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.860 10:03:49 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.860 10:03:49 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:18.860 10:03:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:12:18.860 [2024-12-09 10:03:49.557691] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
00:12:18.860 [2024-12-09 10:03:49.557877] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61153 ] 00:12:19.119 [2024-12-09 10:03:49.736485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.119 [2024-12-09 10:03:49.891136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.496 10:03:50 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:20.496 10:03:50 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:12:20.496 10:03:50 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:12:20.496 { 00:12:20.496 "version": "SPDK v25.01-pre git sha1 b4f857a04", 00:12:20.496 "fields": { 00:12:20.496 "major": 25, 00:12:20.496 "minor": 1, 00:12:20.496 "patch": 0, 00:12:20.496 "suffix": "-pre", 00:12:20.496 "commit": "b4f857a04" 00:12:20.496 } 00:12:20.496 } 00:12:20.496 10:03:51 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:12:20.496 10:03:51 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:12:20.496 10:03:51 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:12:20.496 10:03:51 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:12:20.496 10:03:51 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:12:20.496 10:03:51 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:12:20.496 10:03:51 app_cmdline -- app/cmdline.sh@26 -- # sort 00:12:20.496 10:03:51 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.496 10:03:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:12:20.496 10:03:51 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.496 10:03:51 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:12:20.496 10:03:51 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:12:20.496 10:03:51 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:20.496 10:03:51 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:12:20.496 10:03:51 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:20.496 10:03:51 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:20.496 10:03:51 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:20.496 10:03:51 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:20.496 10:03:51 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:20.496 10:03:51 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:20.496 10:03:51 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:20.496 10:03:51 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:20.496 10:03:51 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:20.496 10:03:51 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:20.755 request: 00:12:20.755 { 00:12:20.755 "method": "env_dpdk_get_mem_stats", 00:12:20.755 "req_id": 1 00:12:20.755 } 00:12:20.755 Got JSON-RPC error response 00:12:20.755 response: 00:12:20.755 { 00:12:20.755 "code": -32601, 00:12:20.755 "message": "Method not found" 00:12:20.755 } 00:12:20.755 10:03:51 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:12:20.755 10:03:51 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:20.755 10:03:51 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:20.755 10:03:51 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:20.755 10:03:51 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61153 00:12:20.755 10:03:51 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 61153 ']' 00:12:20.755 10:03:51 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 61153 00:12:20.755 10:03:51 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:12:20.755 10:03:51 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:20.755 10:03:51 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61153 00:12:20.755 10:03:51 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:20.755 killing process with pid 61153 00:12:20.755 10:03:51 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:20.755 10:03:51 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61153' 00:12:20.755 10:03:51 app_cmdline -- common/autotest_common.sh@973 -- # kill 61153 00:12:20.755 10:03:51 app_cmdline -- common/autotest_common.sh@978 -- # wait 61153 00:12:24.040 00:12:24.040 real 0m5.028s 00:12:24.040 user 0m5.414s 00:12:24.040 sys 0m0.732s 00:12:24.040 ************************************ 00:12:24.040 END TEST app_cmdline 00:12:24.040 ************************************ 00:12:24.040 10:03:54 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:24.040 10:03:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:12:24.040 10:03:54 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:24.040 10:03:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:24.040 10:03:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:24.040 10:03:54 -- common/autotest_common.sh@10 -- # set +x 00:12:24.040 ************************************ 00:12:24.040 START TEST version 00:12:24.040 ************************************ 00:12:24.040 10:03:54 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:24.040 * Looking for test storage... 
00:12:24.040 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:24.040 10:03:54 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:24.040 10:03:54 version -- common/autotest_common.sh@1711 -- # lcov --version 00:12:24.040 10:03:54 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:24.040 10:03:54 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:24.040 10:03:54 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:24.040 10:03:54 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:24.040 10:03:54 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:24.040 10:03:54 version -- scripts/common.sh@336 -- # IFS=.-: 00:12:24.040 10:03:54 version -- scripts/common.sh@336 -- # read -ra ver1 00:12:24.040 10:03:54 version -- scripts/common.sh@337 -- # IFS=.-: 00:12:24.040 10:03:54 version -- scripts/common.sh@337 -- # read -ra ver2 00:12:24.040 10:03:54 version -- scripts/common.sh@338 -- # local 'op=<' 00:12:24.040 10:03:54 version -- scripts/common.sh@340 -- # ver1_l=2 00:12:24.040 10:03:54 version -- scripts/common.sh@341 -- # ver2_l=1 00:12:24.040 10:03:54 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:24.040 10:03:54 version -- scripts/common.sh@344 -- # case "$op" in 00:12:24.040 10:03:54 version -- scripts/common.sh@345 -- # : 1 00:12:24.040 10:03:54 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:24.040 10:03:54 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:24.040 10:03:54 version -- scripts/common.sh@365 -- # decimal 1 00:12:24.040 10:03:54 version -- scripts/common.sh@353 -- # local d=1 00:12:24.040 10:03:54 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:24.040 10:03:54 version -- scripts/common.sh@355 -- # echo 1 00:12:24.040 10:03:54 version -- scripts/common.sh@365 -- # ver1[v]=1 00:12:24.040 10:03:54 version -- scripts/common.sh@366 -- # decimal 2 00:12:24.040 10:03:54 version -- scripts/common.sh@353 -- # local d=2 00:12:24.040 10:03:54 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:24.040 10:03:54 version -- scripts/common.sh@355 -- # echo 2 00:12:24.040 10:03:54 version -- scripts/common.sh@366 -- # ver2[v]=2 00:12:24.040 10:03:54 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:24.040 10:03:54 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:24.040 10:03:54 version -- scripts/common.sh@368 -- # return 0 00:12:24.040 10:03:54 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:24.040 10:03:54 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:24.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.040 --rc genhtml_branch_coverage=1 00:12:24.040 --rc genhtml_function_coverage=1 00:12:24.040 --rc genhtml_legend=1 00:12:24.040 --rc geninfo_all_blocks=1 00:12:24.040 --rc geninfo_unexecuted_blocks=1 00:12:24.040 00:12:24.040 ' 00:12:24.040 10:03:54 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:24.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.040 --rc genhtml_branch_coverage=1 00:12:24.040 --rc genhtml_function_coverage=1 00:12:24.040 --rc genhtml_legend=1 00:12:24.040 --rc geninfo_all_blocks=1 00:12:24.040 --rc geninfo_unexecuted_blocks=1 00:12:24.040 00:12:24.040 ' 00:12:24.040 10:03:54 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:24.040 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:12:24.040 --rc genhtml_branch_coverage=1 00:12:24.041 --rc genhtml_function_coverage=1 00:12:24.041 --rc genhtml_legend=1 00:12:24.041 --rc geninfo_all_blocks=1 00:12:24.041 --rc geninfo_unexecuted_blocks=1 00:12:24.041 00:12:24.041 ' 00:12:24.041 10:03:54 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:24.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.041 --rc genhtml_branch_coverage=1 00:12:24.041 --rc genhtml_function_coverage=1 00:12:24.041 --rc genhtml_legend=1 00:12:24.041 --rc geninfo_all_blocks=1 00:12:24.041 --rc geninfo_unexecuted_blocks=1 00:12:24.041 00:12:24.041 ' 00:12:24.041 10:03:54 version -- app/version.sh@17 -- # get_header_version major 00:12:24.041 10:03:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:24.041 10:03:54 version -- app/version.sh@14 -- # cut -f2 00:12:24.041 10:03:54 version -- app/version.sh@14 -- # tr -d '"' 00:12:24.041 10:03:54 version -- app/version.sh@17 -- # major=25 00:12:24.041 10:03:54 version -- app/version.sh@18 -- # get_header_version minor 00:12:24.041 10:03:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:24.041 10:03:54 version -- app/version.sh@14 -- # cut -f2 00:12:24.041 10:03:54 version -- app/version.sh@14 -- # tr -d '"' 00:12:24.041 10:03:54 version -- app/version.sh@18 -- # minor=1 00:12:24.041 10:03:54 version -- app/version.sh@19 -- # get_header_version patch 00:12:24.041 10:03:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:24.041 10:03:54 version -- app/version.sh@14 -- # cut -f2 00:12:24.041 10:03:54 version -- app/version.sh@14 -- # tr -d '"' 00:12:24.041 10:03:54 version -- app/version.sh@19 -- # patch=0 00:12:24.041 10:03:54 version -- app/version.sh@20 -- # get_header_version suffix 00:12:24.041 10:03:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:24.041 10:03:54 version -- app/version.sh@14 -- # cut -f2 00:12:24.041 10:03:54 version -- app/version.sh@14 -- # tr -d '"' 00:12:24.041 10:03:54 version -- app/version.sh@20 -- # suffix=-pre 00:12:24.041 10:03:54 version -- app/version.sh@22 -- # version=25.1 00:12:24.041 10:03:54 version -- app/version.sh@25 -- # (( patch != 0 )) 00:12:24.041 10:03:54 version -- app/version.sh@28 -- # version=25.1rc0 00:12:24.041 10:03:54 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:24.041 10:03:54 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:12:24.041 10:03:54 version -- app/version.sh@30 -- # py_version=25.1rc0 00:12:24.041 10:03:54 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:12:24.041 00:12:24.041 real 0m0.248s 00:12:24.041 user 0m0.166s 00:12:24.041 sys 0m0.121s 00:12:24.041 10:03:54 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:24.041 10:03:54 version -- common/autotest_common.sh@10 -- # set +x 00:12:24.041 ************************************ 00:12:24.041 END TEST version 00:12:24.041 ************************************ 00:12:24.041 10:03:54 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:12:24.041 10:03:54 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:12:24.041 10:03:54 -- spdk/autotest.sh@194 -- # uname -s 00:12:24.041 10:03:54 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:12:24.041 10:03:54 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:12:24.041 10:03:54 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:12:24.041 10:03:54 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:12:24.041 10:03:54 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:12:24.041 10:03:54 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:24.041 10:03:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:24.041 10:03:54 -- common/autotest_common.sh@10 -- # set +x 00:12:24.041 ************************************ 00:12:24.041 START TEST blockdev_nvme 00:12:24.041 ************************************ 00:12:24.041 10:03:54 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:12:24.041 * Looking for test storage... 00:12:24.041 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:24.041 10:03:54 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:24.041 10:03:54 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:12:24.041 10:03:54 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:24.041 10:03:54 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:24.041 10:03:54 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:24.041 10:03:54 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:24.041 10:03:54 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:24.041 10:03:54 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:24.041 10:03:54 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:24.041 10:03:54 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:24.041 10:03:54 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:24.041 10:03:54 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:24.041 10:03:54 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:24.041 10:03:54 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:24.041 10:03:54 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:24.041 10:03:54 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:12:24.041 10:03:54 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:12:24.041 10:03:54 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:24.041 10:03:54 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:24.041 10:03:54 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:12:24.041 10:03:54 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:12:24.041 10:03:54 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:24.041 10:03:54 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:12:24.041 10:03:54 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:24.041 10:03:54 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:12:24.041 10:03:54 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:12:24.041 10:03:54 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:24.041 10:03:54 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:12:24.041 10:03:54 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:24.041 10:03:54 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:24.041 10:03:54 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:24.041 10:03:54 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:12:24.041 10:03:54 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:24.041 10:03:54 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:24.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.041 --rc genhtml_branch_coverage=1 00:12:24.041 --rc genhtml_function_coverage=1 00:12:24.041 --rc genhtml_legend=1 00:12:24.041 --rc geninfo_all_blocks=1 00:12:24.041 --rc geninfo_unexecuted_blocks=1 00:12:24.041 00:12:24.041 ' 00:12:24.041 10:03:54 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:24.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.041 --rc genhtml_branch_coverage=1 00:12:24.041 --rc genhtml_function_coverage=1 00:12:24.041 --rc genhtml_legend=1 00:12:24.041 --rc geninfo_all_blocks=1 00:12:24.041 --rc geninfo_unexecuted_blocks=1 00:12:24.041 00:12:24.041 ' 00:12:24.041 10:03:54 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:24.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.041 --rc genhtml_branch_coverage=1 00:12:24.041 --rc genhtml_function_coverage=1 00:12:24.041 --rc genhtml_legend=1 00:12:24.041 --rc geninfo_all_blocks=1 00:12:24.041 --rc geninfo_unexecuted_blocks=1 00:12:24.041 00:12:24.041 ' 00:12:24.041 10:03:54 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:24.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.041 --rc genhtml_branch_coverage=1 00:12:24.041 --rc genhtml_function_coverage=1 00:12:24.041 --rc genhtml_legend=1 00:12:24.041 --rc geninfo_all_blocks=1 00:12:24.041 --rc geninfo_unexecuted_blocks=1 00:12:24.041 00:12:24.041 ' 00:12:24.041 10:03:54 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:24.041 10:03:54 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:12:24.041 10:03:54 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:24.041 10:03:54 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:24.041 10:03:54 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:24.041 10:03:54 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:24.041 10:03:54 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:12:24.041 10:03:54 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:24.041 10:03:54 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:12:24.041 10:03:54 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:12:24.041 10:03:54 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:12:24.041 10:03:54 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:12:24.041 10:03:54 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:12:24.041 10:03:54 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:12:24.041 10:03:54 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:12:24.041 10:03:54 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:12:24.041 10:03:54 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:12:24.041 10:03:54 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:12:24.041 10:03:54 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:12:24.041 10:03:54 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:12:24.041 10:03:54 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:12:24.041 10:03:54 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:12:24.041 10:03:54 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:12:24.041 10:03:54 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:12:24.041 10:03:54 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61347 00:12:24.041 10:03:54 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:24.041 10:03:54 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:24.041 10:03:54 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 61347 00:12:24.041 10:03:54 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 61347 ']' 00:12:24.041 10:03:54 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.041 10:03:54 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:24.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.041 10:03:54 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.041 10:03:54 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:24.041 10:03:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:24.300 [2024-12-09 10:03:54.975530] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
00:12:24.300 [2024-12-09 10:03:54.975755] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61347 ] 00:12:24.556 [2024-12-09 10:03:55.176568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.813 [2024-12-09 10:03:55.361558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.749 10:03:56 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:25.749 10:03:56 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:12:25.749 10:03:56 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:12:25.749 10:03:56 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:12:25.749 10:03:56 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:12:25.749 10:03:56 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:12:25.749 10:03:56 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:25.749 10:03:56 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:12:25.749 10:03:56 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.749 10:03:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:26.317 10:03:56 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.317 10:03:56 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:12:26.317 10:03:56 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.317 10:03:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:26.317 10:03:56 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.317 10:03:56 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:12:26.317 10:03:56 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:12:26.317 10:03:56 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.317 10:03:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:26.317 10:03:56 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.317 10:03:56 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:12:26.317 10:03:56 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.317 10:03:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:26.317 10:03:56 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.317 10:03:56 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:26.317 10:03:56 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.317 10:03:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:26.317 10:03:56 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.317 10:03:56 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:12:26.317 10:03:56 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:12:26.317 10:03:56 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:12:26.317 10:03:56 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.317 10:03:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:26.317 10:03:56 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.317 10:03:56 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:12:26.317 10:03:56 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:12:26.318 10:03:56 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "9dc3fcbb-8a88-4284-95a5-578c08adf9b0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "9dc3fcbb-8a88-4284-95a5-578c08adf9b0",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "a7f4be68-1e44-4b2d-a662-339b68bd9338"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "a7f4be68-1e44-4b2d-a662-339b68bd9338",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "3affe0b4-6423-4b90-bf6a-26bb25518ec0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3affe0b4-6423-4b90-bf6a-26bb25518ec0",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "780100c2-2380-443a-87a0-23833bcf1d1c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "780100c2-2380-443a-87a0-23833bcf1d1c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "b40a96dd-2444-49ae-a118-7e3df453b00a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "b40a96dd-2444-49ae-a118-7e3df453b00a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "091c7e53-169e-4740-9d1f-696e4f81986e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "091c7e53-169e-4740-9d1f-696e4f81986e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:12:26.318 10:03:57 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:12:26.318 10:03:57 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:12:26.318 10:03:57 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:12:26.318 10:03:57 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 61347 00:12:26.318 10:03:57 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 61347 ']' 00:12:26.318 10:03:57 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 61347 00:12:26.318 10:03:57 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:12:26.318 10:03:57 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:26.318 10:03:57 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61347 00:12:26.318 10:03:57 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:26.318 10:03:57 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:26.318 killing process with pid 61347 00:12:26.318 10:03:57 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61347' 00:12:26.318 10:03:57 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 61347 00:12:26.318 10:03:57 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 61347 00:12:28.850 10:03:59 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:28.850 10:03:59 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:12:28.850 10:03:59 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:28.850 10:03:59 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:28.850 10:03:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:28.850 ************************************ 00:12:28.850 START TEST bdev_hello_world 00:12:28.850 ************************************ 00:12:28.850 10:03:59 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:12:29.108 [2024-12-09 10:03:59.683194] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:12:29.108 [2024-12-09 10:03:59.683385] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61453 ] 00:12:29.108 [2024-12-09 10:03:59.859636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.366 [2024-12-09 10:03:59.989372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.934 [2024-12-09 10:04:00.674596] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:29.934 [2024-12-09 10:04:00.674666] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:12:29.934 [2024-12-09 10:04:00.674700] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:29.934 [2024-12-09 10:04:00.678281] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:29.934 [2024-12-09 10:04:00.678979] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:29.934 [2024-12-09 10:04:00.679031] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:29.934 [2024-12-09 10:04:00.679211] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
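Stepping back to the configuration that preceded this hello-world run: setup_nvme_conf fed the gen_nvme.sh output (a bdev subsystem config with one bdev_nvme_attach_controller entry per PCIe device, 0000:00:10.0 through 0000:00:13.0) into load_subsystem_config, and the unclaimed bdev names were then pulled out of bdev_get_bdevs with jq. A by-hand equivalent of those steps, with the trace's two jq passes collapsed into one filter and rpc.py abbreviated from the scripts/rpc.py path used throughout this log:

json=$(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh)
rpc.py load_subsystem_config -j "$json"
rpc.py bdev_wait_for_examine      # let bdev examination settle, as the trace does
rpc.py bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'
# prints: Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1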
00:12:29.934 00:12:29.934 [2024-12-09 10:04:00.679255] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:31.310 00:12:31.311 real 0m2.349s 00:12:31.311 user 0m1.913s 00:12:31.311 sys 0m0.323s 00:12:31.311 10:04:01 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:31.311 10:04:01 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:31.311 ************************************ 00:12:31.311 END TEST bdev_hello_world 00:12:31.311 ************************************ 00:12:31.311 10:04:01 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:12:31.311 10:04:01 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:31.311 10:04:01 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:31.311 10:04:01 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:31.311 ************************************ 00:12:31.311 START TEST bdev_bounds 00:12:31.311 ************************************ 00:12:31.311 10:04:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:12:31.311 10:04:01 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61495 00:12:31.311 10:04:01 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:31.311 10:04:01 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61495' 00:12:31.311 Process bdevio pid: 61495 00:12:31.311 10:04:01 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61495 00:12:31.311 10:04:01 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:31.311 10:04:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61495 ']' 00:12:31.311 10:04:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.311 10:04:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:31.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.311 10:04:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.311 10:04:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:31.311 10:04:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:31.311 [2024-12-09 10:04:02.089766] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
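The bounds test starting here uses the same two-process pattern: bdevio comes up as its own SPDK app on the shared JSON config, and once its RPC socket is listening the suites are fired over RPC. Reduced to its moving parts (readiness wait and cleanup simplified; paths as in the trace):

/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
bdevio_pid=$!
# ... wait for /var/tmp/spdk.sock to listen, as with spdk_tgt above ...
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests   # kicks off all six per-bdev suites
kill "$bdevio_pid"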
00:12:31.311 [2024-12-09 10:04:02.089984] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61495 ] 00:12:31.570 [2024-12-09 10:04:02.286359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:31.828 [2024-12-09 10:04:02.459377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:31.828 [2024-12-09 10:04:02.459542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.828 [2024-12-09 10:04:02.459595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:32.766 10:04:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:32.766 10:04:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:12:32.766 10:04:03 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:32.766 I/O targets: 00:12:32.766 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:12:32.766 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:12:32.766 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:32.766 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:32.766 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:32.766 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:12:32.766 00:12:32.766 00:12:32.766 CUnit - A unit testing framework for C - Version 2.1-3 00:12:32.766 http://cunit.sourceforge.net/ 00:12:32.766 00:12:32.766 00:12:32.766 Suite: bdevio tests on: Nvme3n1 00:12:32.766 Test: blockdev write read block ...passed 00:12:32.766 Test: blockdev write zeroes read block ...passed 00:12:32.766 Test: blockdev write zeroes read no split ...passed 00:12:32.766 Test: blockdev write zeroes read split ...passed 00:12:32.766 Test: blockdev write zeroes read split partial ...passed 00:12:32.766 Test: blockdev reset ...[2024-12-09 10:04:03.401866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:12:32.766 [2024-12-09 10:04:03.406835] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
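As an aside on the I/O targets list printed at the top of this bdevio run: the MiB figures follow directly from num_blocks times the 4096-byte block size reported in the earlier bdev_get_bdevs dump, for example:

echo $(( 1548666 * 4096 / 1024 / 1024 ))   # Nvme0n1 -> 6049; the log shows this rounded as 6050 MiB
echo $(( 1310720 * 4096 / 1024 / 1024 ))   # Nvme1n1 -> 5120 MiB exactly
echo $((  262144 * 4096 / 1024 / 1024 ))   # Nvme3n1 -> 1024 MiB exactly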
00:12:32.766 passed 00:12:32.766 Test: blockdev write read 8 blocks ...passed 00:12:32.766 Test: blockdev write read size > 128k ...passed 00:12:32.766 Test: blockdev write read invalid size ...passed 00:12:32.766 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:32.766 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:32.767 Test: blockdev write read max offset ...passed 00:12:32.767 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:32.767 Test: blockdev writev readv 8 blocks ...passed 00:12:32.767 Test: blockdev writev readv 30 x 1block ...passed 00:12:32.767 Test: blockdev writev readv block ...passed 00:12:32.767 Test: blockdev writev readv size > 128k ...passed 00:12:32.767 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:32.767 Test: blockdev comparev and writev ...passed 00:12:32.767 Test: blockdev nvme passthru rw ...[2024-12-09 10:04:03.416100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2baa0a000 len:0x1000 00:12:32.767 [2024-12-09 10:04:03.416228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:32.767 passed 00:12:32.767 Test: blockdev nvme passthru vendor specific ...[2024-12-09 10:04:03.417181] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:32.767 passed 00:12:32.767 Test: blockdev nvme admin passthru ...[2024-12-09 10:04:03.417262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:32.767 passed 00:12:32.767 Test: blockdev copy ...passed 00:12:32.767 Suite: bdevio tests on: Nvme2n3 00:12:32.767 Test: blockdev write read block ...passed 00:12:32.767 Test: blockdev write zeroes read block ...passed 00:12:32.767 Test: blockdev write zeroes read no split ...passed 00:12:32.767 Test: blockdev write zeroes read split ...passed 00:12:32.767 Test: blockdev write zeroes read split partial ...passed 00:12:32.767 Test: blockdev reset ...[2024-12-09 10:04:03.495554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:12:32.767 passed 00:12:32.767 Test: blockdev write read 8 blocks ...[2024-12-09 10:04:03.501005] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:12:32.767 passed 00:12:32.767 Test: blockdev write read size > 128k ...passed 00:12:32.767 Test: blockdev write read invalid size ...passed 00:12:32.767 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:32.767 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:32.767 Test: blockdev write read max offset ...passed 00:12:32.767 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:32.767 Test: blockdev writev readv 8 blocks ...passed 00:12:32.767 Test: blockdev writev readv 30 x 1block ...passed 00:12:32.767 Test: blockdev writev readv block ...passed 00:12:32.767 Test: blockdev writev readv size > 128k ...passed 00:12:32.767 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:32.767 Test: blockdev comparev and writev ...[2024-12-09 10:04:03.513391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29d406000 len:0x1000 00:12:32.767 [2024-12-09 10:04:03.513501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:32.767 passed 00:12:32.767 Test: blockdev nvme passthru rw ...passed 00:12:32.767 Test: blockdev nvme passthru vendor specific ...passed 00:12:32.767 Test: blockdev nvme admin passthru ...[2024-12-09 10:04:03.514528] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:32.767 [2024-12-09 10:04:03.514573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:32.767 passed 00:12:32.767 Test: blockdev copy ...passed 00:12:32.767 Suite: bdevio tests on: Nvme2n2 00:12:32.767 Test: blockdev write read block ...passed 00:12:32.767 Test: blockdev write zeroes read block ...passed 00:12:32.767 Test: blockdev write zeroes read no split ...passed 00:12:32.767 Test: blockdev write zeroes read split ...passed 00:12:33.026 Test: blockdev write zeroes read split partial ...passed 00:12:33.026 Test: blockdev reset ...[2024-12-09 10:04:03.586032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:12:33.026 passed 00:12:33.026 Test: blockdev write read 8 blocks ...passed 00:12:33.026 Test: blockdev write read size > 128k ...passed 00:12:33.026 Test: blockdev write read invalid size ...passed 00:12:33.026 Test: blockdev write read offset + nbytes == size of blockdev ...[2024-12-09 10:04:03.591533] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:12:33.026 passed 00:12:33.026 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:33.026 Test: blockdev write read max offset ...passed 00:12:33.026 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:33.026 Test: blockdev writev readv 8 blocks ...passed 00:12:33.026 Test: blockdev writev readv 30 x 1block ...passed 00:12:33.026 Test: blockdev writev readv block ...passed 00:12:33.026 Test: blockdev writev readv size > 128k ...passed 00:12:33.026 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:33.026 Test: blockdev comparev and writev ...[2024-12-09 10:04:03.603146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2caa3c000 len:0x1000 00:12:33.026 passed 00:12:33.026 Test: blockdev nvme passthru rw ...[2024-12-09 10:04:03.603228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:33.026 passed 00:12:33.026 Test: blockdev nvme passthru vendor specific ...passed 00:12:33.026 Test: blockdev nvme admin passthru ...[2024-12-09 10:04:03.604256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:33.026 [2024-12-09 10:04:03.604300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:33.026 passed 00:12:33.026 Test: blockdev copy ...passed 00:12:33.026 Suite: bdevio tests on: Nvme2n1 00:12:33.026 Test: blockdev write read block ...passed 00:12:33.026 Test: blockdev write zeroes read block ...passed 00:12:33.026 Test: blockdev write zeroes read no split ...passed 00:12:33.026 Test: blockdev write zeroes read split ...passed 00:12:33.026 Test: blockdev write zeroes read split partial ...passed 00:12:33.026 Test: blockdev reset ...[2024-12-09 10:04:03.679251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:12:33.026 passed 00:12:33.026 Test: blockdev write read 8 blocks ...[2024-12-09 10:04:03.684426] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:12:33.026 passed 00:12:33.026 Test: blockdev write read size > 128k ...passed 00:12:33.027 Test: blockdev write read invalid size ...passed 00:12:33.027 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:33.027 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:33.027 Test: blockdev write read max offset ...passed 00:12:33.027 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:33.027 Test: blockdev writev readv 8 blocks ...passed 00:12:33.027 Test: blockdev writev readv 30 x 1block ...passed 00:12:33.027 Test: blockdev writev readv block ...passed 00:12:33.027 Test: blockdev writev readv size > 128k ...passed 00:12:33.027 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:33.027 Test: blockdev comparev and writev ...passed 00:12:33.027 Test: blockdev nvme passthru rw ...[2024-12-09 10:04:03.694549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2caa38000 len:0x1000 00:12:33.027 [2024-12-09 10:04:03.694643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:33.027 passed 00:12:33.027 Test: blockdev nvme passthru vendor specific ...passed 00:12:33.027 Test: blockdev nvme admin passthru ...[2024-12-09 10:04:03.695574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:33.027 [2024-12-09 10:04:03.695618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:33.027 passed 00:12:33.027 Test: blockdev copy ...passed 00:12:33.027 Suite: bdevio tests on: Nvme1n1 00:12:33.027 Test: blockdev write read block ...passed 00:12:33.027 Test: blockdev write zeroes read block ...passed 00:12:33.027 Test: blockdev write zeroes read no split ...passed 00:12:33.027 Test: blockdev write zeroes read split ...passed 00:12:33.027 Test: blockdev write zeroes read split partial ...passed 00:12:33.027 Test: blockdev reset ...[2024-12-09 10:04:03.774871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:12:33.027 passed 00:12:33.027 Test: blockdev write read 8 blocks ...[2024-12-09 10:04:03.779388] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:12:33.027 passed 00:12:33.027 Test: blockdev write read size > 128k ...passed 00:12:33.027 Test: blockdev write read invalid size ...passed 00:12:33.027 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:33.027 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:33.027 Test: blockdev write read max offset ...passed 00:12:33.027 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:33.027 Test: blockdev writev readv 8 blocks ...passed 00:12:33.027 Test: blockdev writev readv 30 x 1block ...passed 00:12:33.027 Test: blockdev writev readv block ...passed 00:12:33.027 Test: blockdev writev readv size > 128k ...passed 00:12:33.027 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:33.027 Test: blockdev comparev and writev ...[2024-12-09 10:04:03.789523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2caa34000 len:0x1000 00:12:33.027 [2024-12-09 10:04:03.789634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:33.027 passed 00:12:33.027 Test: blockdev nvme passthru rw ...passed 00:12:33.027 Test: blockdev nvme passthru vendor specific ...passed 00:12:33.027 Test: blockdev nvme admin passthru ...[2024-12-09 10:04:03.790647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:33.027 [2024-12-09 10:04:03.790697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:33.027 passed 00:12:33.027 Test: blockdev copy ...passed 00:12:33.027 Suite: bdevio tests on: Nvme0n1 00:12:33.027 Test: blockdev write read block ...passed 00:12:33.027 Test: blockdev write zeroes read block ...passed 00:12:33.027 Test: blockdev write zeroes read no split ...passed 00:12:33.286 Test: blockdev write zeroes read split ...passed 00:12:33.286 Test: blockdev write zeroes read split partial ...passed 00:12:33.286 Test: blockdev reset ...[2024-12-09 10:04:03.869793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:12:33.286 passed 00:12:33.286 Test: blockdev write read 8 blocks ...[2024-12-09 10:04:03.874061] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:12:33.286 passed 00:12:33.286 Test: blockdev write read size > 128k ...passed 00:12:33.286 Test: blockdev write read invalid size ...passed 00:12:33.286 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:33.286 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:33.286 Test: blockdev write read max offset ...passed 00:12:33.286 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:33.286 Test: blockdev writev readv 8 blocks ...passed 00:12:33.286 Test: blockdev writev readv 30 x 1block ...passed 00:12:33.286 Test: blockdev writev readv block ...passed 00:12:33.286 Test: blockdev writev readv size > 128k ...passed 00:12:33.286 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:33.286 Test: blockdev comparev and writev ...[2024-12-09 10:04:03.882922] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:12:33.286 separate metadata which is not supported yet. 
00:12:33.286 passed 00:12:33.286 Test: blockdev nvme passthru rw ...passed 00:12:33.286 Test: blockdev nvme passthru vendor specific ...[2024-12-09 10:04:03.883511] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:12:33.286 [2024-12-09 10:04:03.883585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:12:33.286 passed 00:12:33.286 Test: blockdev nvme admin passthru ...passed 00:12:33.286 Test: blockdev copy ...passed 00:12:33.286 00:12:33.286 Run Summary: Type Total Ran Passed Failed Inactive 00:12:33.286 suites 6 6 n/a 0 0 00:12:33.286 tests 138 138 138 0 0 00:12:33.286 asserts 893 893 893 0 n/a 00:12:33.286 00:12:33.286 Elapsed time = 1.497 seconds 00:12:33.286 0 00:12:33.286 10:04:03 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61495 00:12:33.286 10:04:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61495 ']' 00:12:33.286 10:04:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61495 00:12:33.286 10:04:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:12:33.286 10:04:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:33.286 10:04:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61495 00:12:33.286 10:04:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:33.286 killing process with pid 61495 00:12:33.286 10:04:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:33.286 10:04:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61495' 00:12:33.286 10:04:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61495 00:12:33.286 10:04:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61495 00:12:34.662 10:04:05 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:12:34.662 00:12:34.662 real 0m3.135s 00:12:34.662 user 0m7.803s 00:12:34.662 sys 0m0.536s 00:12:34.662 10:04:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:34.662 ************************************ 00:12:34.662 END TEST bdev_bounds 00:12:34.662 ************************************ 00:12:34.662 10:04:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:34.662 10:04:05 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:12:34.662 10:04:05 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:34.662 10:04:05 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:34.662 10:04:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:34.662 ************************************ 00:12:34.662 START TEST bdev_nbd 00:12:34.662 ************************************ 00:12:34.662 10:04:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:12:34.662 10:04:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:12:34.662 10:04:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:12:34.662 10:04:05 
blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:34.662 10:04:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:34.663 10:04:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:34.663 10:04:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:12:34.663 10:04:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:12:34.663 10:04:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:12:34.663 10:04:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:34.663 10:04:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:12:34.663 10:04:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:12:34.663 10:04:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:34.663 10:04:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:12:34.663 10:04:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:34.663 10:04:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:12:34.663 10:04:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61560 00:12:34.663 10:04:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:34.663 10:04:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:34.663 10:04:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61560 /var/tmp/spdk-nbd.sock 00:12:34.663 10:04:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61560 ']' 00:12:34.663 10:04:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:34.663 10:04:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:34.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:34.663 10:04:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:34.663 10:04:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:34.663 10:04:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:34.663 [2024-12-09 10:04:05.287608] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
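What follows is the NBD round-trip: a bdev_svc app listening on /var/tmp/spdk-nbd.sock exports each bdev as a kernel /dev/nbdX device, each device is probed with a one-block direct read, and everything is torn down again. One iteration of that loop, sketched from the commands visible below (the readiness wait is shown in its /proc/partitions form, roughly what the waitfornbd helper does; rpc.py abbreviated from scripts/rpc.py):

rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done    # wait for the kernel to register the device
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct  # expect "1+0 records in / 1+0 records out"
rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks                # [] once all exports are stopped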
00:12:34.663 [2024-12-09 10:04:05.287840] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:34.922 [2024-12-09 10:04:05.490073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.922 [2024-12-09 10:04:05.660203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.858 10:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:35.858 10:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:12:35.858 10:04:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:12:35.858 10:04:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:35.858 10:04:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:35.858 10:04:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:35.858 10:04:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:12:35.858 10:04:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:35.858 10:04:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:35.858 10:04:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:35.858 10:04:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:12:35.858 10:04:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:35.858 10:04:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:35.858 10:04:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:35.858 10:04:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:12:36.117 10:04:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:36.117 10:04:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:36.117 10:04:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:36.117 10:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:36.117 10:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:36.117 10:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:36.117 10:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:36.117 10:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:36.117 10:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:36.117 10:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:36.117 10:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:36.117 10:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:36.117 1+0 records in 
00:12:36.117 1+0 records out 00:12:36.117 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000591055 s, 6.9 MB/s 00:12:36.117 10:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:36.117 10:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:36.117 10:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:36.117 10:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:36.117 10:04:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:36.117 10:04:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:36.117 10:04:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:36.117 10:04:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:12:36.375 10:04:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:36.375 10:04:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:36.375 10:04:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:36.375 10:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:36.375 10:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:36.375 10:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:36.375 10:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:36.375 10:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:36.375 10:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:36.375 10:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:36.375 10:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:36.375 10:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:36.375 1+0 records in 00:12:36.375 1+0 records out 00:12:36.375 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000587025 s, 7.0 MB/s 00:12:36.375 10:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:36.375 10:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:36.376 10:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:36.376 10:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:36.376 10:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:36.376 10:04:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:36.376 10:04:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:36.376 10:04:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:12:36.634 10:04:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:36.634 10:04:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:36.634 10:04:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:12:36.634 10:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:12:36.634 10:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:36.634 10:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:36.634 10:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:36.634 10:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:12:36.893 10:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:36.893 10:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:36.893 10:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:36.893 10:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:36.893 1+0 records in 00:12:36.893 1+0 records out 00:12:36.893 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000585467 s, 7.0 MB/s 00:12:36.893 10:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:36.893 10:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:36.893 10:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:36.893 10:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:36.893 10:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:36.893 10:04:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:36.893 10:04:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:36.893 10:04:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:12:37.153 10:04:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:37.153 10:04:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:37.153 10:04:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:37.153 10:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:12:37.153 10:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:37.153 10:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:37.153 10:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:37.153 10:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:12:37.153 10:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:37.153 10:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:37.153 10:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:37.153 10:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:37.153 1+0 records in 00:12:37.153 1+0 records out 00:12:37.153 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000672823 s, 6.1 MB/s 00:12:37.153 10:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:37.153 10:04:07 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:37.153 10:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:37.153 10:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:37.153 10:04:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:37.153 10:04:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:37.153 10:04:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:37.153 10:04:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:12:37.411 10:04:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:37.411 10:04:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:37.411 10:04:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:37.411 10:04:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:12:37.411 10:04:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:37.411 10:04:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:37.411 10:04:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:37.411 10:04:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:12:37.411 10:04:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:37.411 10:04:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:37.411 10:04:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:37.411 10:04:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:37.411 1+0 records in 00:12:37.411 1+0 records out 00:12:37.411 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00073384 s, 5.6 MB/s 00:12:37.411 10:04:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:37.412 10:04:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:37.412 10:04:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:37.412 10:04:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:37.412 10:04:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:37.412 10:04:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:37.412 10:04:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:37.412 10:04:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:12:38.009 10:04:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:38.009 10:04:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:38.010 10:04:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:38.010 10:04:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:12:38.010 10:04:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:38.010 10:04:08 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:38.010 10:04:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:38.010 10:04:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:12:38.010 10:04:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:38.010 10:04:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:38.010 10:04:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:38.010 10:04:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:38.010 1+0 records in 00:12:38.010 1+0 records out 00:12:38.010 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100298 s, 4.1 MB/s 00:12:38.010 10:04:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.010 10:04:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:38.010 10:04:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.010 10:04:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:38.010 10:04:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:38.010 10:04:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:38.010 10:04:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:38.010 10:04:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:38.268 10:04:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:38.268 { 00:12:38.268 "nbd_device": "/dev/nbd0", 00:12:38.268 "bdev_name": "Nvme0n1" 00:12:38.268 }, 00:12:38.268 { 00:12:38.268 "nbd_device": "/dev/nbd1", 00:12:38.268 "bdev_name": "Nvme1n1" 00:12:38.268 }, 00:12:38.268 { 00:12:38.268 "nbd_device": "/dev/nbd2", 00:12:38.268 "bdev_name": "Nvme2n1" 00:12:38.268 }, 00:12:38.268 { 00:12:38.268 "nbd_device": "/dev/nbd3", 00:12:38.268 "bdev_name": "Nvme2n2" 00:12:38.268 }, 00:12:38.268 { 00:12:38.268 "nbd_device": "/dev/nbd4", 00:12:38.268 "bdev_name": "Nvme2n3" 00:12:38.268 }, 00:12:38.268 { 00:12:38.268 "nbd_device": "/dev/nbd5", 00:12:38.268 "bdev_name": "Nvme3n1" 00:12:38.268 } 00:12:38.268 ]' 00:12:38.268 10:04:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:38.268 10:04:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:38.268 10:04:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:38.268 { 00:12:38.268 "nbd_device": "/dev/nbd0", 00:12:38.268 "bdev_name": "Nvme0n1" 00:12:38.268 }, 00:12:38.268 { 00:12:38.268 "nbd_device": "/dev/nbd1", 00:12:38.268 "bdev_name": "Nvme1n1" 00:12:38.268 }, 00:12:38.268 { 00:12:38.268 "nbd_device": "/dev/nbd2", 00:12:38.268 "bdev_name": "Nvme2n1" 00:12:38.268 }, 00:12:38.268 { 00:12:38.268 "nbd_device": "/dev/nbd3", 00:12:38.268 "bdev_name": "Nvme2n2" 00:12:38.268 }, 00:12:38.268 { 00:12:38.268 "nbd_device": "/dev/nbd4", 00:12:38.268 "bdev_name": "Nvme2n3" 00:12:38.268 }, 00:12:38.268 { 00:12:38.268 "nbd_device": "/dev/nbd5", 00:12:38.268 "bdev_name": "Nvme3n1" 00:12:38.268 } 00:12:38.268 ]' 00:12:38.268 10:04:08 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:12:38.268 10:04:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:38.268 10:04:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:12:38.268 10:04:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:38.268 10:04:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:38.268 10:04:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:38.268 10:04:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:38.527 10:04:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:38.527 10:04:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:38.527 10:04:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:38.527 10:04:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:38.527 10:04:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:38.527 10:04:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:38.527 10:04:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:38.527 10:04:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:38.527 10:04:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:38.527 10:04:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:38.785 10:04:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:38.785 10:04:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:38.785 10:04:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:38.785 10:04:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:38.785 10:04:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:38.785 10:04:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:38.785 10:04:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:38.785 10:04:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:38.785 10:04:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:38.785 10:04:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:39.353 10:04:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:39.353 10:04:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:39.353 10:04:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:39.353 10:04:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:39.353 10:04:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:39.353 10:04:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:39.353 10:04:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:39.353 10:04:09 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:12:39.353 10:04:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:39.353 10:04:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:39.612 10:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:39.612 10:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:39.612 10:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:39.612 10:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:39.612 10:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:39.612 10:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:39.612 10:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:39.612 10:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:39.612 10:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:39.612 10:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:39.871 10:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:39.871 10:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:39.871 10:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:39.871 10:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:39.871 10:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:39.871 10:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:39.871 10:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:39.871 10:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:39.871 10:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:39.871 10:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:40.130 10:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:40.130 10:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:40.130 10:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:40.130 10:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:40.130 10:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:40.130 10:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:40.130 10:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:40.130 10:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:40.130 10:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:40.130 10:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:40.130 10:04:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:40.389 10:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:40.389 10:04:11 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:40.389 10:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:40.648 10:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:40.648 10:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:40.648 10:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:40.648 10:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:40.648 10:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:40.648 10:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:40.648 10:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:12:40.648 10:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:40.648 10:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:12:40.648 10:04:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:40.648 10:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:40.648 10:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:40.648 10:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:40.648 10:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:40.648 10:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:40.648 10:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:40.648 10:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:40.648 10:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:40.648 10:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:40.648 10:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:40.648 10:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:40.648 10:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:12:40.648 10:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:40.648 10:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:40.648 10:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:12:40.907 /dev/nbd0 00:12:40.907 10:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:40.907 10:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:40.907 10:04:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:40.907 10:04:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:40.907 10:04:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:40.907 
10:04:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:40.907 10:04:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:40.907 10:04:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:40.907 10:04:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:40.907 10:04:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:40.907 10:04:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:40.907 1+0 records in 00:12:40.907 1+0 records out 00:12:40.907 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000535739 s, 7.6 MB/s 00:12:40.907 10:04:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.907 10:04:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:40.907 10:04:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.907 10:04:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:40.907 10:04:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:40.907 10:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:40.907 10:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:40.907 10:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:12:41.166 /dev/nbd1 00:12:41.166 10:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:41.166 10:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:41.166 10:04:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:41.166 10:04:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:41.166 10:04:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:41.166 10:04:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:41.167 10:04:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:41.167 10:04:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:41.167 10:04:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:41.167 10:04:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:41.167 10:04:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:41.167 1+0 records in 00:12:41.167 1+0 records out 00:12:41.167 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000529534 s, 7.7 MB/s 00:12:41.167 10:04:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.167 10:04:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:41.167 10:04:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.167 10:04:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:41.167 10:04:11 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:12:41.167 10:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:41.167 10:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:41.167 10:04:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:12:41.429 /dev/nbd10 00:12:41.429 10:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:41.429 10:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:41.429 10:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:12:41.429 10:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:41.429 10:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:41.429 10:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:41.430 10:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:12:41.430 10:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:41.430 10:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:41.430 10:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:41.430 10:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:41.689 1+0 records in 00:12:41.689 1+0 records out 00:12:41.689 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000761116 s, 5.4 MB/s 00:12:41.689 10:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.689 10:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:41.689 10:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.689 10:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:41.689 10:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:41.689 10:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:41.689 10:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:41.689 10:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:12:41.948 /dev/nbd11 00:12:41.949 10:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:41.949 10:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:41.949 10:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:12:41.949 10:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:41.949 10:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:41.949 10:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:41.949 10:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:12:41.949 10:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:41.949 10:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:41.949 10:04:12 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:41.949 10:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:41.949 1+0 records in 00:12:41.949 1+0 records out 00:12:41.949 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000799024 s, 5.1 MB/s 00:12:41.949 10:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.949 10:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:41.949 10:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.949 10:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:41.949 10:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:41.949 10:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:41.949 10:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:41.949 10:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:12:42.207 /dev/nbd12 00:12:42.207 10:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:42.207 10:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:42.207 10:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:12:42.207 10:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:42.207 10:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:42.207 10:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:42.207 10:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:12:42.207 10:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:42.207 10:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:42.207 10:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:42.207 10:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:42.207 1+0 records in 00:12:42.207 1+0 records out 00:12:42.207 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000738663 s, 5.5 MB/s 00:12:42.207 10:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.207 10:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:42.207 10:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.207 10:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:42.207 10:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:42.207 10:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:42.207 10:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:42.207 10:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:12:42.465 /dev/nbd13 
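The same readiness check runs after every nbd_start_disk call above: poll /proc/partitions until the device name appears, then prove the device actually answers I/O with a single O_DIRECT read. A condensed reconstruction of that helper, pieced together from the common/autotest_common.sh@872-@893 trace lines (the standalone form, the 0.1s sleep between polls, and the temp-file path are assumptions; the grep/dd/stat sequence mirrors the log):

#!/usr/bin/env bash
waitfornbd() {
    local nbd_name=$1
    local tmp_file=/tmp/nbdtest   # the log uses spdk/test/bdev/nbdtest
    local i size

    # 1) Up to 20 polls for the device to show up in /proc/partitions
    #    (sh@875-@877 in the trace).
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done

    # 2) Up to 20 attempts to pull one 4096-byte block through the device
    #    (sh@888-@889). iflag=direct bypasses the page cache, so a
    #    successful read means the NBD server really serviced a request.
    for ((i = 1; i <= 20; i++)); do
        dd if="/dev/$nbd_name" of="$tmp_file" bs=4096 count=1 iflag=direct && break
        sleep 0.1
    done

    size=$(stat -c %s "$tmp_file")   # sh@890
    rm -f "$tmp_file"                # sh@891
    [[ $size != 0 ]]                 # sh@892: non-empty copy == device is live
}

waitfornbd nbd13   # the device the trace checks next

The O_DIRECT read is the part that matters: a bare /proc/partitions entry only proves the kernel created the node, while the 4096-byte transfer ("1+0 records in / 1+0 records out" in the log) proves the SPDK side is serving it.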
00:12:42.724 10:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:42.724 10:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:42.724 10:04:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:12:42.724 10:04:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:42.724 10:04:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:42.724 10:04:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:42.724 10:04:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:12:42.724 10:04:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:42.724 10:04:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:42.724 10:04:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:42.724 10:04:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:42.724 1+0 records in 00:12:42.724 1+0 records out 00:12:42.724 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000710763 s, 5.8 MB/s 00:12:42.724 10:04:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.724 10:04:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:42.724 10:04:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.724 10:04:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:42.724 10:04:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:42.724 10:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:42.724 10:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:42.724 10:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:42.724 10:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:42.724 10:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:42.982 10:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:42.982 { 00:12:42.982 "nbd_device": "/dev/nbd0", 00:12:42.982 "bdev_name": "Nvme0n1" 00:12:42.982 }, 00:12:42.982 { 00:12:42.982 "nbd_device": "/dev/nbd1", 00:12:42.982 "bdev_name": "Nvme1n1" 00:12:42.982 }, 00:12:42.982 { 00:12:42.982 "nbd_device": "/dev/nbd10", 00:12:42.982 "bdev_name": "Nvme2n1" 00:12:42.982 }, 00:12:42.982 { 00:12:42.982 "nbd_device": "/dev/nbd11", 00:12:42.982 "bdev_name": "Nvme2n2" 00:12:42.982 }, 00:12:42.982 { 00:12:42.982 "nbd_device": "/dev/nbd12", 00:12:42.982 "bdev_name": "Nvme2n3" 00:12:42.982 }, 00:12:42.982 { 00:12:42.982 "nbd_device": "/dev/nbd13", 00:12:42.982 "bdev_name": "Nvme3n1" 00:12:42.982 } 00:12:42.982 ]' 00:12:42.982 10:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:42.982 { 00:12:42.982 "nbd_device": "/dev/nbd0", 00:12:42.982 "bdev_name": "Nvme0n1" 00:12:42.982 }, 00:12:42.982 { 00:12:42.982 "nbd_device": "/dev/nbd1", 00:12:42.982 "bdev_name": "Nvme1n1" 00:12:42.982 }, 00:12:42.982 { 00:12:42.982 "nbd_device": "/dev/nbd10", 00:12:42.982 "bdev_name": "Nvme2n1" 
00:12:42.982 }, 00:12:42.982 { 00:12:42.982 "nbd_device": "/dev/nbd11", 00:12:42.982 "bdev_name": "Nvme2n2" 00:12:42.982 }, 00:12:42.982 { 00:12:42.982 "nbd_device": "/dev/nbd12", 00:12:42.982 "bdev_name": "Nvme2n3" 00:12:42.982 }, 00:12:42.982 { 00:12:42.982 "nbd_device": "/dev/nbd13", 00:12:42.982 "bdev_name": "Nvme3n1" 00:12:42.982 } 00:12:42.982 ]' 00:12:42.982 10:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:42.982 10:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:42.982 /dev/nbd1 00:12:42.982 /dev/nbd10 00:12:42.982 /dev/nbd11 00:12:42.982 /dev/nbd12 00:12:42.982 /dev/nbd13' 00:12:42.982 10:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:42.982 10:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:42.982 /dev/nbd1 00:12:42.982 /dev/nbd10 00:12:42.982 /dev/nbd11 00:12:42.982 /dev/nbd12 00:12:42.982 /dev/nbd13' 00:12:42.983 10:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:12:42.983 10:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:12:42.983 10:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:12:42.983 10:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:12:42.983 10:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:12:42.983 10:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:42.983 10:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:42.983 10:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:42.983 10:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:42.983 10:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:42.983 10:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:42.983 256+0 records in 00:12:42.983 256+0 records out 00:12:42.983 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010612 s, 98.8 MB/s 00:12:42.983 10:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:42.983 10:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:43.241 256+0 records in 00:12:43.241 256+0 records out 00:12:43.241 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.172833 s, 6.1 MB/s 00:12:43.241 10:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:43.241 10:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:43.241 256+0 records in 00:12:43.241 256+0 records out 00:12:43.241 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.162808 s, 6.4 MB/s 00:12:43.241 10:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:43.241 10:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:12:43.499 256+0 records in 00:12:43.499 256+0 records out 00:12:43.499 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.155094 s, 6.8 MB/s 00:12:43.499 10:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:43.499 10:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:43.758 256+0 records in 00:12:43.758 256+0 records out 00:12:43.758 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.18833 s, 5.6 MB/s 00:12:43.758 10:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:43.758 10:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:43.758 256+0 records in 00:12:43.758 256+0 records out 00:12:43.758 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.152056 s, 6.9 MB/s 00:12:43.758 10:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:43.758 10:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:44.017 256+0 records in 00:12:44.017 256+0 records out 00:12:44.017 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.181869 s, 5.8 MB/s 00:12:44.017 10:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:12:44.017 10:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:44.017 10:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:44.017 10:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:44.017 10:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:44.017 10:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:44.017 10:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:44.017 10:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:44.017 10:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:44.017 10:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:44.017 10:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:44.017 10:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:44.017 10:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:44.017 10:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:44.017 10:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:44.017 10:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:44.017 10:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:44.017 10:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:44.017 10:04:14 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:44.017 10:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:44.017 10:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:44.017 10:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:44.017 10:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:44.017 10:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:44.017 10:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:44.017 10:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:44.017 10:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:44.276 10:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:44.276 10:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:44.276 10:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:44.276 10:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:44.276 10:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:44.276 10:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:44.276 10:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:44.276 10:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:44.276 10:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:44.276 10:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:44.845 10:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:44.845 10:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:44.845 10:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:44.845 10:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:44.845 10:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:44.845 10:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:44.845 10:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:44.845 10:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:44.845 10:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:44.845 10:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:45.103 10:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:45.103 10:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:45.103 10:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:45.103 10:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:45.103 10:04:15 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:45.103 10:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:45.103 10:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:45.103 10:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:45.103 10:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:45.103 10:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:45.361 10:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:45.361 10:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:45.361 10:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:45.361 10:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:45.361 10:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:45.361 10:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:45.361 10:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:45.361 10:04:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:45.361 10:04:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:45.361 10:04:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:45.621 10:04:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:45.621 10:04:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:45.621 10:04:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:45.621 10:04:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:45.621 10:04:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:45.621 10:04:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:45.621 10:04:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:45.621 10:04:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:45.621 10:04:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:45.621 10:04:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:45.879 10:04:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:45.879 10:04:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:45.879 10:04:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:45.879 10:04:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:45.879 10:04:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:45.879 10:04:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:45.879 10:04:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:45.879 10:04:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:45.879 10:04:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:45.879 10:04:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:12:45.879 10:04:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:46.138 10:04:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:46.138 10:04:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:46.138 10:04:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:46.397 10:04:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:46.397 10:04:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:46.397 10:04:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:46.397 10:04:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:46.397 10:04:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:46.397 10:04:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:46.397 10:04:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:12:46.397 10:04:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:46.397 10:04:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:12:46.397 10:04:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:46.397 10:04:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:46.397 10:04:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:12:46.397 10:04:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:12:46.656 malloc_lvol_verify 00:12:46.656 10:04:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:12:46.914 dc945ffd-eb72-4102-bb6b-08e702a022dd 00:12:46.914 10:04:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:12:47.173 cf379b01-8cfc-4f49-a20a-da6732cdc63f 00:12:47.173 10:04:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:12:47.740 /dev/nbd0 00:12:47.740 10:04:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:12:47.740 10:04:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:12:47.740 10:04:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:12:47.740 10:04:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:12:47.740 10:04:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:12:47.740 mke2fs 1.47.0 (5-Feb-2023) 00:12:47.740 Discarding device blocks: 0/4096 done 00:12:47.740 Creating filesystem with 4096 1k blocks and 1024 inodes 00:12:47.740 00:12:47.740 Allocating group tables: 0/1 done 00:12:47.740 Writing inode tables: 0/1 done 00:12:47.740 Creating journal (1024 blocks): done 00:12:47.740 Writing superblocks and filesystem accounting information: 0/1 done 00:12:47.740 00:12:47.740 10:04:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:47.740 10:04:18 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:47.740 10:04:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:47.740 10:04:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:47.740 10:04:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:47.740 10:04:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:47.740 10:04:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:47.998 10:04:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:47.998 10:04:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:47.998 10:04:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:47.998 10:04:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:47.998 10:04:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:47.998 10:04:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:47.998 10:04:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:47.998 10:04:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:47.998 10:04:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61560 00:12:47.998 10:04:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61560 ']' 00:12:47.998 10:04:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61560 00:12:47.998 10:04:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:12:47.998 10:04:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:47.998 10:04:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61560 00:12:47.998 10:04:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:47.998 10:04:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:47.998 killing process with pid 61560 00:12:47.998 10:04:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61560' 00:12:47.998 10:04:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61560 00:12:47.998 10:04:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61560 00:12:49.390 10:04:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:12:49.390 00:12:49.390 real 0m14.954s 00:12:49.390 user 0m21.443s 00:12:49.390 sys 0m4.738s 00:12:49.390 10:04:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:49.390 10:04:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:49.390 ************************************ 00:12:49.390 END TEST bdev_nbd 00:12:49.390 ************************************ 00:12:49.390 10:04:20 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:12:49.390 10:04:20 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:12:49.390 skipping fio tests on NVMe due to multi-ns failures. 00:12:49.390 10:04:20 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
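Teardown mirrors setup: each exported device is stopped over the RPC socket, the script waits for the kernel to drop it from /proc/partitions, and a final nbd_get_disks must come back empty. The "# true" at nbd_common.sh@65 in the trace is the guard that keeps grep -c's exit status 1 on zero matches from tripping errexit. A sketch of that sequence using the same paths as the log; the wrapper function itself is illustrative:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

nbd_stop_disks() {
    local dev name i
    for dev in "$@"; do
        "$rpc" -s "$sock" nbd_stop_disk "$dev"
        name=$(basename "$dev")
        # waitfornbd_exit: up to 20 polls for the node to disappear
        # (loop condensed from the sh@35-@45 trace lines)
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$name" /proc/partitions || break
            sleep 0.1
        done
    done

    # grep -c prints 0 (and exits 1) when nothing matches; '|| true'
    # keeps the command substitution from aborting under set -e.
    local count
    count=$("$rpc" -s "$sock" nbd_get_disks \
        | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    (( count == 0 ))
}

nbd_stop_disks /dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5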
00:12:49.390 10:04:20 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:49.390 10:04:20 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:49.390 10:04:20 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:12:49.390 10:04:20 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:49.390 10:04:20 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:49.390 ************************************ 00:12:49.390 START TEST bdev_verify 00:12:49.390 ************************************ 00:12:49.390 10:04:20 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:49.648 [2024-12-09 10:04:20.286401] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:12:49.648 [2024-12-09 10:04:20.286622] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61990 ] 00:12:49.906 [2024-12-09 10:04:20.481680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:49.906 [2024-12-09 10:04:20.655041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:49.906 [2024-12-09 10:04:20.655041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.842 Running I/O for 5 seconds... 00:12:53.154 17600.00 IOPS, 68.75 MiB/s [2024-12-09T10:04:24.906Z] 17024.00 IOPS, 66.50 MiB/s [2024-12-09T10:04:25.841Z] 16554.67 IOPS, 64.67 MiB/s [2024-12-09T10:04:26.776Z] 16704.00 IOPS, 65.25 MiB/s [2024-12-09T10:04:26.776Z] 16588.80 IOPS, 64.80 MiB/s 00:12:55.979 Latency(us) 00:12:55.979 [2024-12-09T10:04:26.776Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:55.979 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:55.979 Verification LBA range: start 0x0 length 0xbd0bd 00:12:55.979 Nvme0n1 : 5.05 1393.91 5.44 0.00 0.00 91503.15 22282.24 82456.20 00:12:55.979 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:55.979 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:12:55.979 Nvme0n1 : 5.06 1340.97 5.24 0.00 0.00 95159.63 21090.68 114390.11 00:12:55.979 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:55.979 Verification LBA range: start 0x0 length 0xa0000 00:12:55.979 Nvme1n1 : 5.05 1392.96 5.44 0.00 0.00 91399.74 25141.99 79596.45 00:12:55.979 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:55.979 Verification LBA range: start 0xa0000 length 0xa0000 00:12:55.979 Nvme1n1 : 5.06 1339.96 5.23 0.00 0.00 94966.66 22639.71 110100.48 00:12:55.979 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:55.979 Verification LBA range: start 0x0 length 0x80000 00:12:55.979 Nvme2n1 : 5.07 1401.51 5.47 0.00 0.00 90736.29 7685.59 76260.07 00:12:55.979 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:55.979 Verification LBA range: start 0x80000 length 0x80000 00:12:55.979 Nvme2n1 : 5.06 1339.47 5.23 0.00 0.00 94768.51 22639.71 107240.73 00:12:55.979 Job: 
Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:55.979 Verification LBA range: start 0x0 length 0x80000 00:12:55.979 Nvme2n2 : 5.07 1401.08 5.47 0.00 0.00 90608.04 8043.05 78166.57 00:12:55.979 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:55.979 Verification LBA range: start 0x80000 length 0x80000 00:12:55.979 Nvme2n2 : 5.07 1339.06 5.23 0.00 0.00 94566.94 22282.24 104857.60 00:12:55.979 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:55.979 Verification LBA range: start 0x0 length 0x80000 00:12:55.979 Nvme2n3 : 5.07 1400.55 5.47 0.00 0.00 90487.87 8400.52 81502.95 00:12:55.980 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:55.980 Verification LBA range: start 0x80000 length 0x80000 00:12:55.980 Nvme2n3 : 5.08 1348.19 5.27 0.00 0.00 93796.84 4885.41 111530.36 00:12:55.980 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:55.980 Verification LBA range: start 0x0 length 0x20000 00:12:55.980 Nvme3n1 : 5.07 1400.07 5.47 0.00 0.00 90364.66 8162.21 83409.45 00:12:55.980 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:55.980 Verification LBA range: start 0x20000 length 0x20000 00:12:55.980 Nvme3n1 : 5.10 1354.24 5.29 0.00 0.00 93343.74 10247.45 115343.36 00:12:55.980 [2024-12-09T10:04:26.777Z] =================================================================================================================== 00:12:55.980 [2024-12-09T10:04:26.777Z] Total : 16451.98 64.27 0.00 0.00 92604.37 4885.41 115343.36 00:12:57.354 00:12:57.354 real 0m7.841s 00:12:57.354 user 0m14.196s 00:12:57.354 sys 0m0.404s 00:12:57.354 10:04:28 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:57.354 10:04:28 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:12:57.354 ************************************ 00:12:57.354 END TEST bdev_verify 00:12:57.354 ************************************ 00:12:57.354 10:04:28 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:57.354 10:04:28 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:12:57.354 10:04:28 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:57.354 10:04:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:57.354 ************************************ 00:12:57.354 START TEST bdev_verify_big_io 00:12:57.354 ************************************ 00:12:57.354 10:04:28 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:57.612 [2024-12-09 10:04:28.168121] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
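Each verify pass above is a plain bdevperf run. The command below is lifted from the traced run_test line for bdev_verify; the flag comments are my reading of bdevperf's usage text, -C is kept verbatim from the log, and the big-I/O pass that follows swaps in -o 65536:

SPDK=/home/vagrant/spdk_repo/spdk   # repo root used throughout the log

args=(
    --json "$SPDK/test/bdev/bdev.json"   # attaches the same six NVMe bdevs
    -q 128        # outstanding I/Os per job
    -o 4096       # I/O size in bytes (65536 in the big-I/O pass)
    -w verify     # write a pattern, read it back, compare
    -t 5          # seconds per job
    -C            # passed through verbatim from the traced command
    -m 0x3        # core mask: reactors on cores 0 and 1
)
"$SPDK/build/examples/bdevperf" "${args[@]}" ''   # trailing '' as in the log

With -m 0x3 the app starts reactors on cores 0 and 1 and runs a job per bdev on each, which is why every NvmeXnY appears twice in the result table, once with Core Mask 0x1 and once with 0x2.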
00:12:57.612 [2024-12-09 10:04:28.168293] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62098 ] 00:12:57.612 [2024-12-09 10:04:28.346687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:57.869 [2024-12-09 10:04:28.487737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.869 [2024-12-09 10:04:28.487763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:58.804 Running I/O for 5 seconds... 00:13:03.865 1633.00 IOPS, 102.06 MiB/s [2024-12-09T10:04:35.234Z] 2686.00 IOPS, 167.88 MiB/s [2024-12-09T10:04:35.234Z] 3126.67 IOPS, 195.42 MiB/s 00:13:04.437 Latency(us) 00:13:04.437 [2024-12-09T10:04:35.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:04.437 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:04.437 Verification LBA range: start 0x0 length 0xbd0b 00:13:04.437 Nvme0n1 : 5.50 139.54 8.72 0.00 0.00 893243.50 18230.92 865551.83 00:13:04.438 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:04.438 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:04.438 Nvme0n1 : 5.67 134.67 8.42 0.00 0.00 931759.17 20018.27 999006.95 00:13:04.438 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:04.438 Verification LBA range: start 0x0 length 0xa000 00:13:04.438 Nvme1n1 : 5.51 139.46 8.72 0.00 0.00 874418.42 90082.21 838860.80 00:13:04.438 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:04.438 Verification LBA range: start 0xa000 length 0xa000 00:13:04.438 Nvme1n1 : 5.68 132.29 8.27 0.00 0.00 915548.95 94848.47 865551.83 00:13:04.438 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:04.438 Verification LBA range: start 0x0 length 0x8000 00:13:04.438 Nvme2n1 : 5.59 141.07 8.82 0.00 0.00 841409.15 82932.83 865551.83 00:13:04.438 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:04.438 Verification LBA range: start 0x8000 length 0x8000 00:13:04.438 Nvme2n1 : 5.70 135.32 8.46 0.00 0.00 873513.51 36461.85 930372.89 00:13:04.438 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:04.438 Verification LBA range: start 0x0 length 0x8000 00:13:04.438 Nvme2n2 : 5.62 148.04 9.25 0.00 0.00 792653.84 23354.65 884616.84 00:13:04.438 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:04.438 Verification LBA range: start 0x8000 length 0x8000 00:13:04.438 Nvme2n2 : 5.72 139.22 8.70 0.00 0.00 829702.68 18826.71 1227787.17 00:13:04.438 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:04.438 Verification LBA range: start 0x0 length 0x8000 00:13:04.438 Nvme2n3 : 5.65 155.18 9.70 0.00 0.00 742417.72 22758.87 899868.86 00:13:04.438 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:04.438 Verification LBA range: start 0x8000 length 0x8000 00:13:04.438 Nvme2n3 : 5.74 137.38 8.59 0.00 0.00 809952.32 17158.52 1731103.65 00:13:04.438 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:04.438 Verification LBA range: start 0x0 length 0x2000 00:13:04.438 Nvme3n1 : 5.66 161.65 10.10 0.00 0.00 695063.57 5600.35 907494.87 00:13:04.438 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO 
size: 65536) 00:13:04.438 Verification LBA range: start 0x2000 length 0x2000 00:13:04.438 Nvme3n1 : 5.83 172.53 10.78 0.00 0.00 625812.23 1020.28 1769233.69 00:13:04.438 [2024-12-09T10:04:35.235Z] =================================================================================================================== 00:13:04.438 [2024-12-09T10:04:35.235Z] Total : 1736.37 108.52 0.00 0.00 811112.42 1020.28 1769233.69 00:13:06.970 00:13:06.970 real 0m9.683s 00:13:06.970 user 0m17.905s 00:13:06.970 sys 0m0.419s 00:13:06.970 10:04:37 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:06.970 10:04:37 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.970 ************************************ 00:13:06.970 END TEST bdev_verify_big_io 00:13:06.970 ************************************ 00:13:07.228 10:04:37 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:07.228 10:04:37 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:07.228 10:04:37 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:07.228 10:04:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:07.228 ************************************ 00:13:07.228 START TEST bdev_write_zeroes 00:13:07.228 ************************************ 00:13:07.228 10:04:37 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:07.228 [2024-12-09 10:04:37.924739] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:13:07.228 [2024-12-09 10:04:37.924933] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62214 ] 00:13:07.486 [2024-12-09 10:04:38.112100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.744 [2024-12-09 10:04:38.283450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.310 Running I/O for 1 seconds... 
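The START/END banners and the real/user/sys lines that frame every sub-test come from the run_test wrapper that blockdev.sh invokes (the common/autotest_common.sh@1105-@1130 trace lines). A condensed sketch of what it does; the real helper also manages xtrace on/off and failure bookkeeping, which is omitted here:

SPDK=/home/vagrant/spdk_repo/spdk

run_test() {
    local test_name=$1
    shift

    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"

    time "$@"        # bash's time keyword emits the real/user/sys lines
    local rc=$?

    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}

run_test bdev_write_zeroes "$SPDK/build/examples/bdevperf" \
    --json "$SPDK/test/bdev/bdev.json" -q 128 -o 4096 -w write_zeroes -t 1 ''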
00:13:09.683 48320.00 IOPS, 188.75 MiB/s 00:13:09.683 Latency(us) 00:13:09.683 [2024-12-09T10:04:40.480Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:09.683 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:09.683 Nvme0n1 : 1.03 8030.35 31.37 0.00 0.00 15895.22 7864.32 31695.59 00:13:09.683 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:09.683 Nvme1n1 : 1.03 8017.71 31.32 0.00 0.00 15890.48 11856.06 30980.65 00:13:09.683 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:09.683 Nvme2n1 : 1.03 8005.41 31.27 0.00 0.00 15861.89 11021.96 30384.87 00:13:09.683 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:09.683 Nvme2n2 : 1.03 7992.87 31.22 0.00 0.00 15802.97 7357.91 29789.09 00:13:09.684 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:09.684 Nvme2n3 : 1.03 7980.54 31.17 0.00 0.00 15799.58 7268.54 29789.09 00:13:09.684 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:09.684 Nvme3n1 : 1.04 7906.73 30.89 0.00 0.00 15916.11 11677.32 32172.22 00:13:09.684 [2024-12-09T10:04:40.481Z] =================================================================================================================== 00:13:09.684 [2024-12-09T10:04:40.481Z] Total : 47933.60 187.24 0.00 0.00 15860.97 7268.54 32172.22 00:13:10.618 00:13:10.618 real 0m3.582s 00:13:10.618 user 0m3.116s 00:13:10.618 sys 0m0.339s 00:13:10.618 10:04:41 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:10.618 ************************************ 00:13:10.618 END TEST bdev_write_zeroes 00:13:10.618 10:04:41 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:10.618 ************************************ 00:13:10.877 10:04:41 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:10.877 10:04:41 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:10.877 10:04:41 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:10.877 10:04:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:10.877 ************************************ 00:13:10.877 START TEST bdev_json_nonenclosed 00:13:10.877 ************************************ 00:13:10.877 10:04:41 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:10.877 [2024-12-09 10:04:41.558061] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
00:13:10.877 [2024-12-09 10:04:41.558234] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62267 ] 00:13:11.135 [2024-12-09 10:04:41.746365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.135 [2024-12-09 10:04:41.882865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.135 [2024-12-09 10:04:41.882996] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:11.135 [2024-12-09 10:04:41.883029] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:11.136 [2024-12-09 10:04:41.883045] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:11.702 00:13:11.702 real 0m0.803s 00:13:11.702 user 0m0.556s 00:13:11.702 sys 0m0.140s 00:13:11.702 10:04:42 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:11.702 ************************************ 00:13:11.702 END TEST bdev_json_nonenclosed 00:13:11.702 ************************************ 00:13:11.702 10:04:42 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:11.702 10:04:42 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:11.702 10:04:42 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:11.702 10:04:42 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:11.702 10:04:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:11.702 ************************************ 00:13:11.702 START TEST bdev_json_nonarray 00:13:11.702 ************************************ 00:13:11.702 10:04:42 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:11.702 [2024-12-09 10:04:42.428959] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:13:11.702 [2024-12-09 10:04:42.429179] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62298 ] 00:13:11.960 [2024-12-09 10:04:42.624499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.218 [2024-12-09 10:04:42.795045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.218 [2024-12-09 10:04:42.795204] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
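(The json_config errors in this test pair, "not enclosed in {}" a moment ago and "'subsystems' should be an array" here, followed below by the rpc shutdown and app_stop warning, are negative tests: bdevperf is fed a deliberately malformed config and must exit non-zero. A sketch of both cases; the file contents are illustrative, the shipped fixtures are test/bdev/nonenclosed.json and test/bdev/nonarray.json:)

#!/usr/bin/env bash
# Sketch: reproduce the two malformed-config cases by hand.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
printf '"subsystems": []\n'   > /tmp/nonenclosed.json  # top level not enclosed in {}
printf '{"subsystems": 42}\n' > /tmp/nonarray.json     # "subsystems" is not an array
for cfg in /tmp/nonenclosed.json /tmp/nonarray.json; do
  if "$SPDK_DIR/build/examples/bdevperf" --json "$cfg" -q 128 -o 4096 -w write_zeroes -t 1; then
    echo "unexpected success: $cfg"   # both configs must be rejected at startup
  fi
done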
00:13:12.218 [2024-12-09 10:04:42.795242] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:12.218 [2024-12-09 10:04:42.795261] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:12.477 00:13:12.477 real 0m0.886s 00:13:12.477 user 0m0.599s 00:13:12.477 sys 0m0.179s 00:13:12.477 10:04:43 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:12.477 10:04:43 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:12.477 ************************************ 00:13:12.477 END TEST bdev_json_nonarray 00:13:12.477 ************************************ 00:13:12.477 10:04:43 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:13:12.477 10:04:43 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:13:12.477 10:04:43 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:13:12.477 10:04:43 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:13:12.477 10:04:43 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:13:12.477 10:04:43 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:12.477 10:04:43 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:12.477 10:04:43 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:13:12.477 10:04:43 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:13:12.477 10:04:43 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:13:12.477 10:04:43 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:13:12.477 ************************************ 00:13:12.477 END TEST blockdev_nvme 00:13:12.477 ************************************ 00:13:12.477 00:13:12.477 real 0m48.616s 00:13:12.477 user 1m12.612s 00:13:12.477 sys 0m8.265s 00:13:12.477 10:04:43 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:12.477 10:04:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:12.736 10:04:43 -- spdk/autotest.sh@209 -- # uname -s 00:13:12.736 10:04:43 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:13:12.736 10:04:43 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:13:12.736 10:04:43 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:12.736 10:04:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:12.736 10:04:43 -- common/autotest_common.sh@10 -- # set +x 00:13:12.736 ************************************ 00:13:12.736 START TEST blockdev_nvme_gpt 00:13:12.736 ************************************ 00:13:12.736 10:04:43 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:13:12.736 * Looking for test storage... 
00:13:12.736 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:12.736 10:04:43 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:12.736 10:04:43 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:12.736 10:04:43 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version 00:13:12.736 10:04:43 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:12.736 10:04:43 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:12.736 10:04:43 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:12.736 10:04:43 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:12.736 10:04:43 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:13:12.736 10:04:43 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:13:12.736 10:04:43 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:13:12.736 10:04:43 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:13:12.736 10:04:43 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:13:12.736 10:04:43 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:13:12.736 10:04:43 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:13:12.736 10:04:43 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:12.736 10:04:43 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:13:12.736 10:04:43 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:13:12.736 10:04:43 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:12.736 10:04:43 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:12.736 10:04:43 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:13:12.736 10:04:43 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:13:12.736 10:04:43 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:12.736 10:04:43 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:13:12.736 10:04:43 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:13:12.736 10:04:43 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:13:12.736 10:04:43 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:13:12.736 10:04:43 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:12.736 10:04:43 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:13:12.736 10:04:43 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:13:12.736 10:04:43 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:12.736 10:04:43 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:12.736 10:04:43 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:13:12.736 10:04:43 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:12.736 10:04:43 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:12.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.736 --rc genhtml_branch_coverage=1 00:13:12.736 --rc genhtml_function_coverage=1 00:13:12.736 --rc genhtml_legend=1 00:13:12.736 --rc geninfo_all_blocks=1 00:13:12.736 --rc geninfo_unexecuted_blocks=1 00:13:12.736 00:13:12.736 ' 00:13:12.736 10:04:43 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:12.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.736 --rc 
genhtml_branch_coverage=1 00:13:12.736 --rc genhtml_function_coverage=1 00:13:12.736 --rc genhtml_legend=1 00:13:12.736 --rc geninfo_all_blocks=1 00:13:12.736 --rc geninfo_unexecuted_blocks=1 00:13:12.736 00:13:12.736 ' 00:13:12.736 10:04:43 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:12.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.736 --rc genhtml_branch_coverage=1 00:13:12.736 --rc genhtml_function_coverage=1 00:13:12.736 --rc genhtml_legend=1 00:13:12.736 --rc geninfo_all_blocks=1 00:13:12.736 --rc geninfo_unexecuted_blocks=1 00:13:12.736 00:13:12.736 ' 00:13:12.736 10:04:43 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:12.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.736 --rc genhtml_branch_coverage=1 00:13:12.736 --rc genhtml_function_coverage=1 00:13:12.736 --rc genhtml_legend=1 00:13:12.736 --rc geninfo_all_blocks=1 00:13:12.736 --rc geninfo_unexecuted_blocks=1 00:13:12.736 00:13:12.736 ' 00:13:12.736 10:04:43 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:12.736 10:04:43 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:13:12.736 10:04:43 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:13:12.736 10:04:43 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:12.736 10:04:43 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:13:12.736 10:04:43 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:13:12.736 10:04:43 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:13:12.736 10:04:43 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:13:12.736 10:04:43 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:13:12.736 10:04:43 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:13:12.736 10:04:43 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:13:12.736 10:04:43 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:13:12.736 10:04:43 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:13:12.736 10:04:43 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:13:12.736 10:04:43 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:13:12.736 10:04:43 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:13:12.736 10:04:43 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:13:12.736 10:04:43 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:13:12.736 10:04:43 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:13:12.736 10:04:43 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:13:12.736 10:04:43 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:13:12.736 10:04:43 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:13:12.736 10:04:43 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:13:12.736 10:04:43 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:13:12.736 10:04:43 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62382 00:13:12.736 10:04:43 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:13:12.736 10:04:43 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; 
exit 1' SIGINT SIGTERM EXIT 00:13:12.736 10:04:43 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 62382 00:13:12.736 10:04:43 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 62382 ']' 00:13:12.736 10:04:43 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.736 10:04:43 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:12.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:12.736 10:04:43 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.736 10:04:43 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:12.736 10:04:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:12.995 [2024-12-09 10:04:43.624143] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:13:12.995 [2024-12-09 10:04:43.624340] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62382 ] 00:13:13.254 [2024-12-09 10:04:43.818979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.254 [2024-12-09 10:04:43.989873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.641 10:04:45 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:14.641 10:04:45 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:13:14.641 10:04:45 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:13:14.641 10:04:45 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:13:14.641 10:04:45 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:14.641 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:14.898 Waiting for block devices as requested 00:13:14.898 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:14.898 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:15.156 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:15.156 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:20.424 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:20.424 10:04:50 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:13:20.424 10:04:50 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:13:20.424 10:04:50 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:13:20.424 10:04:50 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:13:20.424 10:04:50 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:13:20.424 10:04:50 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:13:20.424 10:04:50 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:13:20.424 10:04:50 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:13:20.424 10:04:50 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:13:20.424 10:04:50 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:13:20.424 10:04:50 blockdev_nvme_gpt -- 
common/autotest_common.sh@1650 -- # local device=nvme0n1 00:13:20.424 10:04:50 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:20.424 10:04:50 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:20.424 10:04:50 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:13:20.424 10:04:50 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:13:20.424 10:04:50 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:13:20.424 10:04:50 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:13:20.424 10:04:50 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:13:20.424 10:04:50 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:13:20.424 10:04:50 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:20.424 10:04:50 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:13:20.424 10:04:50 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:13:20.424 10:04:50 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:13:20.424 10:04:50 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:13:20.424 10:04:50 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:13:20.424 10:04:50 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:13:20.424 10:04:50 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:20.424 10:04:50 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:13:20.424 10:04:50 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:13:20.424 10:04:50 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:13:20.424 10:04:50 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:13:20.424 10:04:50 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:20.424 10:04:50 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:13:20.424 10:04:50 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:13:20.424 10:04:50 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:13:20.424 10:04:50 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:13:20.424 10:04:50 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:20.424 10:04:50 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:13:20.424 10:04:50 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:13:20.424 10:04:50 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:13:20.424 10:04:50 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:13:20.424 10:04:50 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:13:20.424 10:04:50 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:13:20.424 10:04:50 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:20.424 10:04:50 
blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:13:20.424 10:04:50 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:13:20.424 10:04:50 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:13:20.424 10:04:50 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:13:20.424 10:04:50 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:13:20.424 10:04:50 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:13:20.424 10:04:50 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:13:20.424 10:04:51 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:13:20.424 BYT; 00:13:20.424 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:13:20.424 10:04:51 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:13:20.424 BYT; 00:13:20.424 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:13:20.424 10:04:51 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:13:20.425 10:04:51 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:13:20.425 10:04:51 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:13:20.425 10:04:51 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:13:20.425 10:04:51 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:13:20.425 10:04:51 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:13:20.425 10:04:51 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:13:20.425 10:04:51 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:13:20.425 10:04:51 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:13:20.425 10:04:51 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:13:20.425 10:04:51 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:13:20.425 10:04:51 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:13:20.425 10:04:51 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:13:20.425 10:04:51 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:13:20.425 10:04:51 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:13:20.425 10:04:51 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:13:20.425 10:04:51 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:13:20.425 10:04:51 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:13:20.425 10:04:51 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:13:20.425 10:04:51 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:13:20.425 10:04:51 blockdev_nvme_gpt -- scripts/common.sh@427 -- # 
GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:13:20.425 10:04:51 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:13:20.425 10:04:51 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:13:20.425 10:04:51 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:13:20.425 10:04:51 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:13:20.425 10:04:51 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:13:20.425 10:04:51 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:13:20.425 10:04:51 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:13:20.425 10:04:51 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:13:21.362 The operation has completed successfully. 00:13:21.362 10:04:52 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:13:22.738 The operation has completed successfully. 00:13:22.738 10:04:53 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:22.997 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:23.564 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:23.564 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:23.564 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:23.822 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:23.822 10:04:54 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:13:23.822 10:04:54 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.822 10:04:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:23.822 [] 00:13:23.822 10:04:54 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.822 10:04:54 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:13:23.822 10:04:54 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:13:23.822 10:04:54 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:13:23.822 10:04:54 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:23.822 10:04:54 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:13:23.822 10:04:54 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.822 10:04:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:24.079 10:04:54 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.079 10:04:54 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:13:24.079 10:04:54 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.079 10:04:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:24.079 10:04:54 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.079 10:04:54 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:13:24.079 10:04:54 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:13:24.079 10:04:54 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.079 10:04:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:24.079 10:04:54 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.337 10:04:54 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:13:24.337 10:04:54 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.337 10:04:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:24.337 10:04:54 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.337 10:04:54 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:24.337 10:04:54 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.337 10:04:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:24.337 10:04:54 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.337 10:04:54 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:13:24.337 10:04:54 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:13:24.337 10:04:54 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.337 10:04:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:24.337 10:04:54 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:13:24.337 10:04:55 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.337 10:04:55 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:13:24.337 10:04:55 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:13:24.338 10:04:55 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "fdaaca98-f023-49bb-beae-c6f515a30ccc"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "fdaaca98-f023-49bb-beae-c6f515a30ccc",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' 
"oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "ded3af38-afb1-4b8a-b2dd-f00dba35f48b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ded3af38-afb1-4b8a-b2dd-f00dba35f48b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' 
"trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "aac9198d-000e-4fca-af3e-64f7cd6ee74e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "aac9198d-000e-4fca-af3e-64f7cd6ee74e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "d5be0d90-77ac-4308-9b42-50d94c7cb73d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d5be0d90-77ac-4308-9b42-50d94c7cb73d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' 
"can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "8dc0251c-549c-40f3-af8b-2553923161a8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "8dc0251c-549c-40f3-af8b-2553923161a8",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:13:24.338 10:04:55 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:13:24.338 10:04:55 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:13:24.338 10:04:55 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:13:24.338 10:04:55 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 62382 00:13:24.338 10:04:55 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 62382 ']' 00:13:24.338 10:04:55 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 62382 00:13:24.338 10:04:55 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:13:24.338 10:04:55 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:24.338 10:04:55 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62382 00:13:24.338 killing process with pid 62382 00:13:24.338 10:04:55 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:24.338 10:04:55 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:24.338 10:04:55 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62382' 00:13:24.338 10:04:55 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 62382 00:13:24.338 10:04:55 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 62382 00:13:26.913 10:04:57 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:26.913 10:04:57 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:13:26.913 10:04:57 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:26.913 10:04:57 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:26.913 10:04:57 
blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:26.913 ************************************ 00:13:26.913 START TEST bdev_hello_world 00:13:26.913 ************************************ 00:13:26.913 10:04:57 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:13:27.172 [2024-12-09 10:04:57.732496] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:13:27.172 [2024-12-09 10:04:57.732705] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63025 ] 00:13:27.172 [2024-12-09 10:04:57.924393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.430 [2024-12-09 10:04:58.083082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.998 [2024-12-09 10:04:58.748784] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:27.998 [2024-12-09 10:04:58.748886] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:13:27.998 [2024-12-09 10:04:58.748941] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:27.998 [2024-12-09 10:04:58.752609] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:13:27.998 [2024-12-09 10:04:58.753434] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:27.998 [2024-12-09 10:04:58.753480] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:27.998 [2024-12-09 10:04:58.753655] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
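(The NOTICE sequence above, start app, open bdev, open io channel, write, then read back "Hello World!", is the entire hello_bdev example. A sketch of running it standalone; it assumes gen_nvme.sh accepts --json-with-subsystems to emit a complete config, which is an assumption, since this job loaded the generated fragment through load_subsystem_config instead:)

#!/usr/bin/env bash
# Sketch: run the hello_bdev example by hand against the first NVMe bdev.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
# Assumed helper flag: emit a full JSON config attaching the local NVMe controllers.
"$SPDK_DIR/scripts/gen_nvme.sh" --json-with-subsystems > /tmp/bdev.json
# -b selects the bdev to open; expect the write -> read -> "Hello World!" NOTICEs.
"$SPDK_DIR/build/examples/hello_bdev" --json /tmp/bdev.json -b Nvme0n1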
00:13:27.998 00:13:27.998 [2024-12-09 10:04:58.753691] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:29.373 ************************************ 00:13:29.373 END TEST bdev_hello_world 00:13:29.373 ************************************ 00:13:29.373 00:13:29.373 real 0m2.316s 00:13:29.373 user 0m1.873s 00:13:29.373 sys 0m0.329s 00:13:29.373 10:04:59 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:29.373 10:04:59 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:13:29.373 10:04:59 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:13:29.373 10:04:59 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:29.373 10:04:59 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:29.373 10:04:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:29.373 ************************************ 00:13:29.373 START TEST bdev_bounds 00:13:29.373 ************************************ 00:13:29.373 10:04:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:13:29.373 Process bdevio pid: 63067 00:13:29.373 10:04:59 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=63067 00:13:29.373 10:04:59 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:29.373 10:04:59 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:29.373 10:04:59 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 63067' 00:13:29.373 10:04:59 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 63067 00:13:29.373 10:04:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 63067 ']' 00:13:29.373 10:04:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.373 10:04:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:29.373 10:04:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.373 10:04:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:29.373 10:04:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:29.373 [2024-12-09 10:05:00.109433] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
00:13:29.373 [2024-12-09 10:05:00.109851] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63067 ] 00:13:29.632 [2024-12-09 10:05:00.296094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:29.891 [2024-12-09 10:05:00.439539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:29.891 [2024-12-09 10:05:00.439694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.891 [2024-12-09 10:05:00.439712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:30.456 10:05:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:30.456 10:05:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:13:30.456 10:05:01 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:30.715 I/O targets: 00:13:30.715 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:13:30.715 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:13:30.715 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:13:30.715 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:30.715 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:30.715 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:30.715 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:13:30.715 00:13:30.715 00:13:30.715 CUnit - A unit testing framework for C - Version 2.1-3 00:13:30.715 http://cunit.sourceforge.net/ 00:13:30.715 00:13:30.715 00:13:30.715 Suite: bdevio tests on: Nvme3n1 00:13:30.715 Test: blockdev write read block ...passed 00:13:30.715 Test: blockdev write zeroes read block ...passed 00:13:30.715 Test: blockdev write zeroes read no split ...passed 00:13:30.715 Test: blockdev write zeroes read split ...passed 00:13:30.715 Test: blockdev write zeroes read split partial ...passed 00:13:30.715 Test: blockdev reset ...[2024-12-09 10:05:01.392162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:13:30.715 [2024-12-09 10:05:01.397127] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:13:30.715 passed 00:13:30.715 Test: blockdev write read 8 blocks ...passed 00:13:30.715 Test: blockdev write read size > 128k ...passed 00:13:30.715 Test: blockdev write read invalid size ...passed 00:13:30.715 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:30.715 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:30.715 Test: blockdev write read max offset ...passed 00:13:30.715 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:30.715 Test: blockdev writev readv 8 blocks ...passed 00:13:30.715 Test: blockdev writev readv 30 x 1block ...passed 00:13:30.715 Test: blockdev writev readv block ...passed 00:13:30.715 Test: blockdev writev readv size > 128k ...passed 00:13:30.715 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:30.715 Test: blockdev comparev and writev ...[2024-12-09 10:05:01.409588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b8204000 len:0x1000 00:13:30.715 passed 00:13:30.715 Test: blockdev nvme passthru rw ...[2024-12-09 10:05:01.409955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:30.715 passed 00:13:30.715 Test: blockdev nvme passthru vendor specific ...[2024-12-09 10:05:01.410992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:13:30.715 passed 00:13:30.715 Test: blockdev nvme admin passthru ...[2024-12-09 10:05:01.411146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:13:30.715 passed 00:13:30.715 Test: blockdev copy ...passed 00:13:30.715 Suite: bdevio tests on: Nvme2n3 00:13:30.715 Test: blockdev write read block ...passed 00:13:30.715 Test: blockdev write zeroes read block ...passed 00:13:30.715 Test: blockdev write zeroes read no split ...passed 00:13:30.715 Test: blockdev write zeroes read split ...passed 00:13:30.982 Test: blockdev write zeroes read split partial ...passed 00:13:30.982 Test: blockdev reset ...[2024-12-09 10:05:01.537178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:13:30.982 passed 00:13:30.982 Test: blockdev write read 8 blocks ...[2024-12-09 10:05:01.542283] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:13:30.982 passed 00:13:30.982 Test: blockdev write read size > 128k ...passed 00:13:30.982 Test: blockdev write read invalid size ...passed 00:13:30.982 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:30.982 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:30.982 Test: blockdev write read max offset ...passed 00:13:30.982 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:30.982 Test: blockdev writev readv 8 blocks ...passed 00:13:30.982 Test: blockdev writev readv 30 x 1block ...passed 00:13:30.982 Test: blockdev writev readv block ...passed 00:13:30.982 Test: blockdev writev readv size > 128k ...passed 00:13:30.982 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:30.983 Test: blockdev comparev and writev ...[2024-12-09 10:05:01.551471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b8202000 len:0x1000 00:13:30.983 [2024-12-09 10:05:01.551683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:30.983 passed 00:13:30.983 Test: blockdev nvme passthru rw ...passed 00:13:30.983 Test: blockdev nvme passthru vendor specific ...[2024-12-09 10:05:01.552688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:13:30.983 [2024-12-09 10:05:01.552734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:13:30.983 passed 00:13:30.983 Test: blockdev nvme admin passthru ...passed 00:13:30.983 Test: blockdev copy ...passed 00:13:30.983 Suite: bdevio tests on: Nvme2n2 00:13:30.983 Test: blockdev write read block ...passed 00:13:30.983 Test: blockdev write zeroes read block ...passed 00:13:30.983 Test: blockdev write zeroes read no split ...passed 00:13:30.983 Test: blockdev write zeroes read split ...passed 00:13:30.983 Test: blockdev write zeroes read split partial ...passed 00:13:30.983 Test: blockdev reset ...[2024-12-09 10:05:01.631963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:13:30.983 passed 00:13:30.983 Test: blockdev write read 8 blocks ...[2024-12-09 10:05:01.637163] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:13:30.983 passed 00:13:30.983 Test: blockdev write read size > 128k ...passed 00:13:30.983 Test: blockdev write read invalid size ...passed 00:13:30.983 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:30.983 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:30.983 Test: blockdev write read max offset ...passed 00:13:30.983 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:30.983 Test: blockdev writev readv 8 blocks ...passed 00:13:30.983 Test: blockdev writev readv 30 x 1block ...passed 00:13:30.983 Test: blockdev writev readv block ...passed 00:13:30.983 Test: blockdev writev readv size > 128k ...passed 00:13:30.983 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:30.983 Test: blockdev comparev and writev ...[2024-12-09 10:05:01.646675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cc038000 len:0x1000 00:13:30.983 [2024-12-09 10:05:01.646890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:30.983 passed 00:13:30.983 Test: blockdev nvme passthru rw ...passed 00:13:30.983 Test: blockdev nvme passthru vendor specific ...[2024-12-09 10:05:01.647996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:13:30.983 [2024-12-09 10:05:01.648044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:13:30.983 passed 00:13:30.983 Test: blockdev nvme admin passthru ...passed 00:13:30.983 Test: blockdev copy ...passed 00:13:30.983 Suite: bdevio tests on: Nvme2n1 00:13:30.983 Test: blockdev write read block ...passed 00:13:30.983 Test: blockdev write zeroes read block ...passed 00:13:30.983 Test: blockdev write zeroes read no split ...passed 00:13:30.983 Test: blockdev write zeroes read split ...passed 00:13:30.983 Test: blockdev write zeroes read split partial ...passed 00:13:30.983 Test: blockdev reset ...[2024-12-09 10:05:01.722055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:13:30.983 [2024-12-09 10:05:01.727251] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:13:30.983 passed 00:13:30.983 Test: blockdev write read 8 blocks ...passed 00:13:30.983 Test: blockdev write read size > 128k ...passed 00:13:30.983 Test: blockdev write read invalid size ...passed 00:13:30.983 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:30.983 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:30.983 Test: blockdev write read max offset ...passed 00:13:30.983 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:30.983 Test: blockdev writev readv 8 blocks ...passed 00:13:30.983 Test: blockdev writev readv 30 x 1block ...passed 00:13:30.983 Test: blockdev writev readv block ...passed 00:13:30.983 Test: blockdev writev readv size > 128k ...passed 00:13:30.983 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:30.983 Test: blockdev comparev and writev ...[2024-12-09 10:05:01.736634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cc034000 len:0x1000 00:13:30.983 [2024-12-09 10:05:01.736711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:30.983 passed 00:13:30.983 Test: blockdev nvme passthru rw ...passed 00:13:30.983 Test: blockdev nvme passthru vendor specific ...[2024-12-09 10:05:01.737989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:13:30.983 [2024-12-09 10:05:01.738039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:13:30.983 passed 00:13:30.983 Test: blockdev nvme admin passthru ...passed 00:13:30.983 Test: blockdev copy ...passed 00:13:30.983 Suite: bdevio tests on: Nvme1n1p2 00:13:30.983 Test: blockdev write read block ...passed 00:13:30.983 Test: blockdev write zeroes read block ...passed 00:13:30.983 Test: blockdev write zeroes read no split ...passed 00:13:31.265 Test: blockdev write zeroes read split ...passed 00:13:31.265 Test: blockdev write zeroes read split partial ...passed 00:13:31.265 Test: blockdev reset ...[2024-12-09 10:05:01.816117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:13:31.265 [2024-12-09 10:05:01.820815] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:13:31.265 passed 00:13:31.265 Test: blockdev write read 8 blocks ...passed 00:13:31.265 Test: blockdev write read size > 128k ...passed 00:13:31.265 Test: blockdev write read invalid size ...passed 00:13:31.265 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:31.265 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:31.265 Test: blockdev write read max offset ...passed 00:13:31.265 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:31.265 Test: blockdev writev readv 8 blocks ...passed 00:13:31.265 Test: blockdev writev readv 30 x 1block ...passed 00:13:31.265 Test: blockdev writev readv block ...passed 00:13:31.265 Test: blockdev writev readv size > 128k ...passed 00:13:31.265 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:31.265 Test: blockdev comparev and writev ...[2024-12-09 10:05:01.830658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2cc030000 len:0x1000 00:13:31.265 [2024-12-09 10:05:01.830736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:31.265 passed 00:13:31.265 Test: blockdev nvme passthru rw ...passed 00:13:31.265 Test: blockdev nvme passthru vendor specific ...passed 00:13:31.265 Test: blockdev nvme admin passthru ...passed 00:13:31.265 Test: blockdev copy ...passed 00:13:31.265 Suite: bdevio tests on: Nvme1n1p1 00:13:31.265 Test: blockdev write read block ...passed 00:13:31.265 Test: blockdev write zeroes read block ...passed 00:13:31.265 Test: blockdev write zeroes read no split ...passed 00:13:31.265 Test: blockdev write zeroes read split ...passed 00:13:31.265 Test: blockdev write zeroes read split partial ...passed 00:13:31.265 Test: blockdev reset ...[2024-12-09 10:05:01.904019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:13:31.265 [2024-12-09 10:05:01.908585] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful.
00:13:31.265 passed 00:13:31.265 Test: blockdev write read 8 blocks ...passed 00:13:31.265 Test: blockdev write read size > 128k ...passed 00:13:31.265 Test: blockdev write read invalid size ...passed 00:13:31.265 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:31.265 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:31.265 Test: blockdev write read max offset ...passed 00:13:31.265 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:31.265 Test: blockdev writev readv 8 blocks ...passed 00:13:31.265 Test: blockdev writev readv 30 x 1block ...passed 00:13:31.265 Test: blockdev writev readv block ...passed 00:13:31.265 Test: blockdev writev readv size > 128k ...passed 00:13:31.265 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:31.265 Test: blockdev comparev and writev ...[2024-12-09 10:05:01.918530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b840e000 len:0x1000 00:13:31.265 [2024-12-09 10:05:01.918742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:31.265 passed 00:13:31.265 Test: blockdev nvme passthru rw ...passed 00:13:31.265 Test: blockdev nvme passthru vendor specific ...passed 00:13:31.265 Test: blockdev nvme admin passthru ...passed 00:13:31.265 Test: blockdev copy ...passed 00:13:31.265 Suite: bdevio tests on: Nvme0n1 00:13:31.265 Test: blockdev write read block ...passed 00:13:31.265 Test: blockdev write zeroes read block ...passed 00:13:31.265 Test: blockdev write zeroes read no split ...passed 00:13:31.265 Test: blockdev write zeroes read split ...passed 00:13:31.265 Test: blockdev write zeroes read split partial ...passed 00:13:31.265 Test: blockdev reset ...[2024-12-09 10:05:01.988905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:13:31.265 [2024-12-09 10:05:01.993545] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:13:31.265 passed 00:13:31.265 Test: blockdev write read 8 blocks ...passed 00:13:31.265 Test: blockdev write read size > 128k ...passed 00:13:31.265 Test: blockdev write read invalid size ...passed 00:13:31.265 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:31.265 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:31.265 Test: blockdev write read max offset ...passed 00:13:31.265 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:31.265 Test: blockdev writev readv 8 blocks ...passed 00:13:31.265 Test: blockdev writev readv 30 x 1block ...passed 00:13:31.265 Test: blockdev writev readv block ...passed 00:13:31.265 Test: blockdev writev readv size > 128k ...passed 00:13:31.265 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:31.265 Test: blockdev comparev and writev ...[2024-12-09 10:05:02.002067] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:13:31.265 separate metadata which is not supported yet. 00:13:31.265 passed 00:13:31.265 Test: blockdev nvme passthru rw ...
00:13:31.265 passed 00:13:31.265 Test: blockdev nvme passthru vendor specific ...[2024-12-09 10:05:02.002849] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:13:31.265 [2024-12-09 10:05:02.002903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:13:31.265 passed 00:13:31.265 Test: blockdev nvme admin passthru ...passed 00:13:31.265 Test: blockdev copy ...passed 00:13:31.265 00:13:31.265 Run Summary: Type Total Ran Passed Failed Inactive 00:13:31.265 suites 7 7 n/a 0 0 00:13:31.265 tests 161 161 161 0 0 00:13:31.265 asserts 1025 1025 1025 0 n/a 00:13:31.265 00:13:31.265 Elapsed time = 1.834 seconds 00:13:31.265 0 00:13:31.265 10:05:02 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 63067 00:13:31.265 10:05:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 63067 ']' 00:13:31.265 10:05:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 63067 00:13:31.265 10:05:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:13:31.265 10:05:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:31.265 10:05:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63067 00:13:31.524 10:05:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:31.524 killing process with pid 63067 00:13:31.524 10:05:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:31.524 10:05:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63067' 00:13:31.524 10:05:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 63067 00:13:31.524 10:05:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 63067 00:13:32.458 10:05:03 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:13:32.458 00:13:32.458 real 0m3.242s 00:13:32.458 user 0m8.139s 00:13:32.458 sys 0m0.509s 00:13:32.458 10:05:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:32.458 ************************************ 00:13:32.458 END TEST bdev_bounds 00:13:32.458 ************************************ 00:13:32.458 10:05:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:32.717 10:05:03 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:13:32.717 10:05:03 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:32.717 10:05:03 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:32.717 10:05:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:32.717 ************************************ 00:13:32.717 START TEST bdev_nbd 00:13:32.717 ************************************ 00:13:32.717 10:05:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:13:32.717 10:05:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:13:32.717 10:05:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:13:32.717 10:05:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:32.717 10:05:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:32.717 10:05:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:32.717 10:05:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:13:32.717 10:05:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:13:32.717 10:05:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:13:32.717 10:05:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:32.717 10:05:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:13:32.717 10:05:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:13:32.717 10:05:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:13:32.717 10:05:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:13:32.717 10:05:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:32.717 10:05:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:13:32.717 10:05:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=63139 00:13:32.717 10:05:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:32.717 10:05:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 63139 /var/tmp/spdk-nbd.sock 00:13:32.717 10:05:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:32.717 10:05:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 63139 ']' 00:13:32.717 10:05:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:32.717 10:05:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:32.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:32.717 10:05:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:32.717 10:05:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:32.717 10:05:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:32.717 [2024-12-09 10:05:03.420414] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
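The setup traced above launches bdev_svc with a dedicated RPC socket and the test's bdev JSON config, waits for the socket to come up, and then drives everything else through rpc.py. A condensed sketch of that sequence, assuming a built SPDK tree and root privileges; the binaries, socket, and RPC method names are the ones visible in this trace, while the backgrounding and the final jq filter are illustrative shorthand rather than the harness's exact invocation:

    # start the generic bdev application and point it at the test bdev config
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    # once the socket is listening, export a bdev as a kernel NBD device ...
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
    # ... prove the export services reads, bypassing the page cache ...
    dd if=/dev/nbd0 of=/dev/null bs=4096 count=1 iflag=direct
    # ... then enumerate and tear down the exports
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device'
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0

As the start/stop verification below shows, nbd_start_disk can also be called with just the bdev name, in which case the target picks a free /dev/nbdX itself and the helper captures it from the RPC response.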
00:13:32.717 [2024-12-09 10:05:03.421329] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:32.975 [2024-12-09 10:05:03.613416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.975 [2024-12-09 10:05:03.762574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.910 10:05:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:33.910 10:05:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:13:33.910 10:05:04 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:13:33.910 10:05:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:33.910 10:05:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:33.910 10:05:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:33.910 10:05:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:13:33.910 10:05:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:33.910 10:05:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:33.910 10:05:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:33.910 10:05:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:13:33.910 10:05:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:33.910 10:05:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:33.910 10:05:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:13:33.910 10:05:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:13:34.169 10:05:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:34.169 10:05:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:34.169 10:05:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:34.169 10:05:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:34.169 10:05:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:34.169 10:05:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:34.169 10:05:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:34.169 10:05:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:34.169 10:05:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:34.169 10:05:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:34.169 10:05:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:34.169 10:05:04 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:34.169 1+0 records in 00:13:34.169 1+0 records out 00:13:34.169 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00049252 s, 8.3 MB/s 00:13:34.169 10:05:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.169 10:05:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:34.169 10:05:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.169 10:05:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:34.169 10:05:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:34.169 10:05:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:34.169 10:05:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:13:34.169 10:05:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:13:34.438 10:05:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:34.438 10:05:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:34.438 10:05:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:34.438 10:05:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:34.438 10:05:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:34.438 10:05:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:34.438 10:05:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:34.438 10:05:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:34.438 10:05:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:34.438 10:05:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:34.438 10:05:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:34.438 10:05:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:34.438 1+0 records in 00:13:34.438 1+0 records out 00:13:34.438 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000670692 s, 6.1 MB/s 00:13:34.438 10:05:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.438 10:05:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:34.438 10:05:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.438 10:05:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:34.438 10:05:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:34.438 10:05:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:34.438 10:05:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:13:34.438 10:05:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:13:35.005 10:05:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:35.005 10:05:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:35.005 10:05:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:35.006 10:05:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:13:35.006 10:05:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:35.006 10:05:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:35.006 10:05:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:35.006 10:05:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:13:35.006 10:05:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:35.006 10:05:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:35.006 10:05:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:35.006 10:05:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:35.006 1+0 records in 00:13:35.006 1+0 records out 00:13:35.006 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000622495 s, 6.6 MB/s 00:13:35.006 10:05:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.006 10:05:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:35.006 10:05:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.006 10:05:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:35.006 10:05:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:35.006 10:05:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:35.006 10:05:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:13:35.006 10:05:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:13:35.264 10:05:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:35.264 10:05:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:35.264 10:05:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:35.264 10:05:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:13:35.264 10:05:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:35.264 10:05:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:35.264 10:05:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:35.264 10:05:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:13:35.264 10:05:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:35.264 10:05:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:35.264 10:05:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:35.264 10:05:05 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:35.264 1+0 records in 00:13:35.264 1+0 records out 00:13:35.264 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000729097 s, 5.6 MB/s 00:13:35.264 10:05:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.264 10:05:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:35.264 10:05:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.264 10:05:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:35.264 10:05:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:35.264 10:05:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:35.264 10:05:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:13:35.264 10:05:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:13:35.523 10:05:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:35.523 10:05:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:35.523 10:05:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:35.523 10:05:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:13:35.523 10:05:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:35.523 10:05:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:35.523 10:05:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:35.523 10:05:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:13:35.523 10:05:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:35.523 10:05:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:35.523 10:05:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:35.523 10:05:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:35.523 1+0 records in 00:13:35.523 1+0 records out 00:13:35.523 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000657833 s, 6.2 MB/s 00:13:35.523 10:05:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.523 10:05:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:35.523 10:05:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.523 10:05:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:35.523 10:05:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:35.523 10:05:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:35.523 10:05:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:13:35.523 10:05:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:13:35.782 10:05:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:35.782 10:05:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:35.782 10:05:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:35.782 10:05:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:13:35.782 10:05:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:35.782 10:05:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:35.782 10:05:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:35.782 10:05:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:13:35.782 10:05:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:35.782 10:05:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:35.782 10:05:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:35.782 10:05:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:35.782 1+0 records in 00:13:35.782 1+0 records out 00:13:35.782 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000736997 s, 5.6 MB/s 00:13:35.782 10:05:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.782 10:05:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:35.782 10:05:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.782 10:05:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:35.782 10:05:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:35.782 10:05:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:35.782 10:05:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:13:35.782 10:05:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:13:36.041 10:05:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:13:36.041 10:05:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:13:36.299 10:05:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:13:36.299 10:05:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:13:36.299 10:05:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:36.299 10:05:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:36.299 10:05:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:36.299 10:05:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:13:36.299 10:05:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:36.299 10:05:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:36.299 10:05:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:36.300 10:05:06 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:36.300 1+0 records in 00:13:36.300 1+0 records out 00:13:36.300 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000844289 s, 4.9 MB/s 00:13:36.300 10:05:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.300 10:05:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:36.300 10:05:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.300 10:05:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:36.300 10:05:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:36.300 10:05:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:36.300 10:05:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:13:36.300 10:05:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:36.558 10:05:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:36.559 { 00:13:36.559 "nbd_device": "/dev/nbd0", 00:13:36.559 "bdev_name": "Nvme0n1" 00:13:36.559 }, 00:13:36.559 { 00:13:36.559 "nbd_device": "/dev/nbd1", 00:13:36.559 "bdev_name": "Nvme1n1p1" 00:13:36.559 }, 00:13:36.559 { 00:13:36.559 "nbd_device": "/dev/nbd2", 00:13:36.559 "bdev_name": "Nvme1n1p2" 00:13:36.559 }, 00:13:36.559 { 00:13:36.559 "nbd_device": "/dev/nbd3", 00:13:36.559 "bdev_name": "Nvme2n1" 00:13:36.559 }, 00:13:36.559 { 00:13:36.559 "nbd_device": "/dev/nbd4", 00:13:36.559 "bdev_name": "Nvme2n2" 00:13:36.559 }, 00:13:36.559 { 00:13:36.559 "nbd_device": "/dev/nbd5", 00:13:36.559 "bdev_name": "Nvme2n3" 00:13:36.559 }, 00:13:36.559 { 00:13:36.559 "nbd_device": "/dev/nbd6", 00:13:36.559 "bdev_name": "Nvme3n1" 00:13:36.559 } 00:13:36.559 ]' 00:13:36.559 10:05:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:36.559 10:05:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:36.559 { 00:13:36.559 "nbd_device": "/dev/nbd0", 00:13:36.559 "bdev_name": "Nvme0n1" 00:13:36.559 }, 00:13:36.559 { 00:13:36.559 "nbd_device": "/dev/nbd1", 00:13:36.559 "bdev_name": "Nvme1n1p1" 00:13:36.559 }, 00:13:36.559 { 00:13:36.559 "nbd_device": "/dev/nbd2", 00:13:36.559 "bdev_name": "Nvme1n1p2" 00:13:36.559 }, 00:13:36.559 { 00:13:36.559 "nbd_device": "/dev/nbd3", 00:13:36.559 "bdev_name": "Nvme2n1" 00:13:36.559 }, 00:13:36.559 { 00:13:36.559 "nbd_device": "/dev/nbd4", 00:13:36.559 "bdev_name": "Nvme2n2" 00:13:36.559 }, 00:13:36.559 { 00:13:36.559 "nbd_device": "/dev/nbd5", 00:13:36.559 "bdev_name": "Nvme2n3" 00:13:36.559 }, 00:13:36.559 { 00:13:36.559 "nbd_device": "/dev/nbd6", 00:13:36.559 "bdev_name": "Nvme3n1" 00:13:36.559 } 00:13:36.559 ]' 00:13:36.559 10:05:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:36.559 10:05:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:13:36.559 10:05:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:36.559 10:05:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:13:36.559 10:05:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:36.559 10:05:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:36.559 10:05:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:36.559 10:05:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:36.818 10:05:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:36.818 10:05:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:36.818 10:05:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:36.818 10:05:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:36.818 10:05:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:36.818 10:05:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:36.818 10:05:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:36.818 10:05:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:36.818 10:05:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:36.818 10:05:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:37.076 10:05:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:37.076 10:05:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:37.076 10:05:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:37.076 10:05:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:37.077 10:05:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:37.077 10:05:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:37.077 10:05:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:37.077 10:05:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:37.077 10:05:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:37.077 10:05:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:37.335 10:05:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:37.335 10:05:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:37.335 10:05:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:37.335 10:05:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:37.335 10:05:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:37.335 10:05:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:13:37.335 10:05:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:37.335 10:05:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:37.335 10:05:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:37.335 10:05:08 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:37.592 10:05:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:37.592 10:05:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:37.592 10:05:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:37.592 10:05:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:37.592 10:05:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:37.592 10:05:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:37.592 10:05:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:37.592 10:05:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:37.592 10:05:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:37.592 10:05:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:38.158 10:05:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:38.158 10:05:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:38.158 10:05:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:38.158 10:05:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:38.158 10:05:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:38.158 10:05:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:38.158 10:05:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:38.158 10:05:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:38.158 10:05:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:38.158 10:05:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:38.158 10:05:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:38.417 10:05:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:38.417 10:05:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:38.417 10:05:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:38.417 10:05:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:38.417 10:05:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:38.417 10:05:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:38.417 10:05:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:38.417 10:05:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:38.417 10:05:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:13:38.675 10:05:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:13:38.675 10:05:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:13:38.675 10:05:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:13:38.675 10:05:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:38.675 10:05:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:38.675 10:05:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:13:38.675 10:05:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:38.675 10:05:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:38.675 10:05:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:38.675 10:05:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:38.675 10:05:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:38.934 10:05:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:38.934 10:05:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:38.934 10:05:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:38.934 10:05:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:38.934 10:05:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:38.934 10:05:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:38.934 10:05:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:38.934 10:05:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:38.934 10:05:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:38.934 10:05:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:38.934 10:05:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:38.934 10:05:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:38.934 10:05:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:13:38.934 10:05:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:38.934 10:05:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:38.934 10:05:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:38.934 10:05:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:13:38.934 10:05:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:38.934 10:05:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:13:38.934 10:05:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:38.934 10:05:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:38.934 10:05:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:38.934 
10:05:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:13:38.934 10:05:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:38.934 10:05:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:38.934 10:05:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:38.934 10:05:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:13:38.934 10:05:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:13:39.193 /dev/nbd0 00:13:39.451 10:05:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:39.451 10:05:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:39.451 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:39.451 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:39.451 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:39.451 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:39.451 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:39.451 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:39.451 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:39.451 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:39.451 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:39.451 1+0 records in 00:13:39.451 1+0 records out 00:13:39.451 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000775888 s, 5.3 MB/s 00:13:39.451 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:39.451 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:39.451 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:39.451 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:39.451 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:39.451 10:05:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:39.451 10:05:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:13:39.451 10:05:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:13:39.711 /dev/nbd1 00:13:39.711 10:05:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:39.711 10:05:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:39.711 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:39.711 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:39.711 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:39.711 10:05:10 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:39.711 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:39.711 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:39.711 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:39.711 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:39.711 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:39.711 1+0 records in 00:13:39.711 1+0 records out 00:13:39.711 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00048287 s, 8.5 MB/s 00:13:39.711 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:39.711 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:39.711 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:39.711 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:39.711 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:39.711 10:05:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:39.711 10:05:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:13:39.711 10:05:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:13:39.970 /dev/nbd10 00:13:39.970 10:05:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:39.970 10:05:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:39.970 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:13:39.970 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:39.970 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:39.970 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:39.970 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:13:39.970 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:39.970 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:39.970 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:39.970 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:39.970 1+0 records in 00:13:39.970 1+0 records out 00:13:39.970 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000647034 s, 6.3 MB/s 00:13:39.970 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:39.970 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:39.970 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:39.970 10:05:10 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:39.970 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:39.970 10:05:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:39.970 10:05:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:13:39.970 10:05:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:13:40.228 /dev/nbd11 00:13:40.228 10:05:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:40.228 10:05:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:40.228 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:13:40.228 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:40.228 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:40.228 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:40.228 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:13:40.228 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:40.228 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:40.228 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:40.228 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:40.228 1+0 records in 00:13:40.228 1+0 records out 00:13:40.228 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000672864 s, 6.1 MB/s 00:13:40.228 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:40.228 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:40.228 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:40.228 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:40.228 10:05:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:40.228 10:05:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:40.228 10:05:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:13:40.228 10:05:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:13:40.486 /dev/nbd12 00:13:40.486 10:05:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:40.486 10:05:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:40.486 10:05:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:13:40.486 10:05:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:40.486 10:05:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:40.486 10:05:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:40.486 10:05:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
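Every attach in this trace is gated by the same waitfornbd pattern visible in the xtrace: poll /proc/partitions until the kernel has registered the device, then read one block back with O_DIRECT and check that something landed. A rough self-contained sketch of that helper, assuming bash; the retry delay and the scratch-file path are placeholders rather than the harness's exact values:

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # the device only shows up in /proc/partitions once the kernel attach completes
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # placeholder back-off; the traced helper's wait step is elided above
        done
        # read a single 4 KiB block with O_DIRECT to prove the device services I/O
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }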
00:13:40.486 10:05:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:40.486 10:05:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:40.486 10:05:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:40.486 10:05:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:40.486 1+0 records in 00:13:40.486 1+0 records out 00:13:40.486 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000549194 s, 7.5 MB/s 00:13:40.486 10:05:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:40.486 10:05:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:40.486 10:05:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:40.486 10:05:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:40.486 10:05:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:40.486 10:05:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:40.486 10:05:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:13:40.486 10:05:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:13:41.079 /dev/nbd13 00:13:41.079 10:05:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:41.079 10:05:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:41.079 10:05:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:13:41.079 10:05:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:41.079 10:05:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:41.079 10:05:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:41.079 10:05:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:13:41.079 10:05:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:41.079 10:05:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:41.079 10:05:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:41.079 10:05:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:41.079 1+0 records in 00:13:41.079 1+0 records out 00:13:41.079 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000638978 s, 6.4 MB/s 00:13:41.079 10:05:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:41.079 10:05:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:41.079 10:05:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:41.079 10:05:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:41.079 10:05:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:41.079 10:05:11 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:41.079 10:05:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:13:41.079 10:05:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:13:41.363 /dev/nbd14 00:13:41.363 10:05:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:13:41.363 10:05:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:13:41.363 10:05:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:13:41.363 10:05:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:41.363 10:05:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:41.363 10:05:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:41.363 10:05:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:13:41.363 10:05:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:41.363 10:05:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:41.363 10:05:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:41.363 10:05:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:41.363 1+0 records in 00:13:41.363 1+0 records out 00:13:41.363 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000790639 s, 5.2 MB/s 00:13:41.363 10:05:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:41.363 10:05:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:41.363 10:05:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:41.363 10:05:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:41.363 10:05:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:41.363 10:05:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:41.363 10:05:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:13:41.363 10:05:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:41.364 10:05:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:41.364 10:05:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:41.623 10:05:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:41.623 { 00:13:41.623 "nbd_device": "/dev/nbd0", 00:13:41.623 "bdev_name": "Nvme0n1" 00:13:41.623 }, 00:13:41.623 { 00:13:41.623 "nbd_device": "/dev/nbd1", 00:13:41.623 "bdev_name": "Nvme1n1p1" 00:13:41.623 }, 00:13:41.623 { 00:13:41.623 "nbd_device": "/dev/nbd10", 00:13:41.623 "bdev_name": "Nvme1n1p2" 00:13:41.623 }, 00:13:41.623 { 00:13:41.623 "nbd_device": "/dev/nbd11", 00:13:41.623 "bdev_name": "Nvme2n1" 00:13:41.623 }, 00:13:41.623 { 00:13:41.623 "nbd_device": "/dev/nbd12", 00:13:41.623 "bdev_name": "Nvme2n2" 00:13:41.623 }, 00:13:41.623 { 00:13:41.623 "nbd_device": "/dev/nbd13", 00:13:41.623 "bdev_name": "Nvme2n3" 
00:13:41.623 }, 00:13:41.623 { 00:13:41.623 "nbd_device": "/dev/nbd14", 00:13:41.623 "bdev_name": "Nvme3n1" 00:13:41.623 } 00:13:41.623 ]' 00:13:41.623 10:05:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:41.623 10:05:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:41.623 { 00:13:41.623 "nbd_device": "/dev/nbd0", 00:13:41.623 "bdev_name": "Nvme0n1" 00:13:41.623 }, 00:13:41.623 { 00:13:41.623 "nbd_device": "/dev/nbd1", 00:13:41.623 "bdev_name": "Nvme1n1p1" 00:13:41.623 }, 00:13:41.623 { 00:13:41.623 "nbd_device": "/dev/nbd10", 00:13:41.623 "bdev_name": "Nvme1n1p2" 00:13:41.623 }, 00:13:41.623 { 00:13:41.623 "nbd_device": "/dev/nbd11", 00:13:41.623 "bdev_name": "Nvme2n1" 00:13:41.623 }, 00:13:41.623 { 00:13:41.623 "nbd_device": "/dev/nbd12", 00:13:41.623 "bdev_name": "Nvme2n2" 00:13:41.623 }, 00:13:41.623 { 00:13:41.623 "nbd_device": "/dev/nbd13", 00:13:41.623 "bdev_name": "Nvme2n3" 00:13:41.623 }, 00:13:41.623 { 00:13:41.623 "nbd_device": "/dev/nbd14", 00:13:41.623 "bdev_name": "Nvme3n1" 00:13:41.623 } 00:13:41.623 ]' 00:13:41.623 10:05:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:41.623 /dev/nbd1 00:13:41.623 /dev/nbd10 00:13:41.623 /dev/nbd11 00:13:41.623 /dev/nbd12 00:13:41.623 /dev/nbd13 00:13:41.623 /dev/nbd14' 00:13:41.623 10:05:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:41.623 /dev/nbd1 00:13:41.623 /dev/nbd10 00:13:41.623 /dev/nbd11 00:13:41.623 /dev/nbd12 00:13:41.623 /dev/nbd13 00:13:41.623 /dev/nbd14' 00:13:41.623 10:05:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:41.623 10:05:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:13:41.623 10:05:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:13:41.623 10:05:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:13:41.623 10:05:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:13:41.623 10:05:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:13:41.623 10:05:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:13:41.623 10:05:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:41.623 10:05:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:41.624 10:05:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:41.624 10:05:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:41.624 10:05:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:41.624 256+0 records in 00:13:41.624 256+0 records out 00:13:41.624 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0110058 s, 95.3 MB/s 00:13:41.624 10:05:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:41.624 10:05:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:41.882 256+0 records in 00:13:41.882 256+0 records out 00:13:41.882 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.177095 s, 5.9 MB/s 00:13:41.882 10:05:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:41.882 10:05:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:42.140 256+0 records in 00:13:42.140 256+0 records out 00:13:42.140 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.178439 s, 5.9 MB/s 00:13:42.140 10:05:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:42.140 10:05:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:42.140 256+0 records in 00:13:42.140 256+0 records out 00:13:42.140 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.181762 s, 5.8 MB/s 00:13:42.140 10:05:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:42.140 10:05:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:42.399 256+0 records in 00:13:42.399 256+0 records out 00:13:42.399 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.170673 s, 6.1 MB/s 00:13:42.399 10:05:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:42.399 10:05:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:42.657 256+0 records in 00:13:42.657 256+0 records out 00:13:42.657 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.174044 s, 6.0 MB/s 00:13:42.657 10:05:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:42.657 10:05:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:42.657 256+0 records in 00:13:42.657 256+0 records out 00:13:42.657 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.167878 s, 6.2 MB/s 00:13:42.657 10:05:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:42.657 10:05:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:13:42.916 256+0 records in 00:13:42.916 256+0 records out 00:13:42.916 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.167309 s, 6.3 MB/s 00:13:42.916 10:05:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:13:42.916 10:05:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:13:42.916 10:05:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:42.916 10:05:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:42.916 10:05:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:42.916 10:05:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:42.916 10:05:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:42.916 10:05:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:13:42.916 10:05:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:42.916 10:05:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:42.916 10:05:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:42.916 10:05:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:42.916 10:05:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:42.916 10:05:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:42.916 10:05:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:42.916 10:05:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:42.916 10:05:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:42.916 10:05:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:42.916 10:05:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:42.916 10:05:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:42.916 10:05:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:13:42.916 10:05:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:42.916 10:05:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:13:42.916 10:05:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:42.916 10:05:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:13:42.916 10:05:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:42.916 10:05:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:42.916 10:05:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:42.916 10:05:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:43.483 10:05:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:43.483 10:05:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:43.483 10:05:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:43.483 10:05:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:43.483 10:05:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:43.483 10:05:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:43.483 10:05:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:43.483 10:05:13 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:13:43.483 10:05:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:43.483 10:05:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:43.742 10:05:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:43.742 10:05:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:43.742 10:05:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:43.742 10:05:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:43.742 10:05:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:43.742 10:05:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:43.742 10:05:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:43.742 10:05:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:43.742 10:05:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:43.742 10:05:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:44.000 10:05:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:44.000 10:05:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:44.000 10:05:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:44.000 10:05:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:44.000 10:05:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:44.000 10:05:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:44.000 10:05:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:44.000 10:05:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:44.000 10:05:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:44.000 10:05:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:44.259 10:05:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:44.259 10:05:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:44.259 10:05:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:44.259 10:05:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:44.259 10:05:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:44.259 10:05:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:44.259 10:05:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:44.259 10:05:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:44.259 10:05:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:44.259 10:05:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:44.526 10:05:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:13:44.526 10:05:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:44.526 10:05:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:44.526 10:05:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:44.526 10:05:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:44.526 10:05:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:44.526 10:05:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:44.526 10:05:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:44.526 10:05:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:44.526 10:05:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:44.783 10:05:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:44.783 10:05:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:44.783 10:05:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:44.783 10:05:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:44.783 10:05:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:44.783 10:05:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:44.783 10:05:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:44.783 10:05:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:44.783 10:05:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:44.783 10:05:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:13:45.351 10:05:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:13:45.351 10:05:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:13:45.351 10:05:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:13:45.351 10:05:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:45.351 10:05:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:45.351 10:05:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:13:45.351 10:05:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:45.351 10:05:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:45.351 10:05:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:45.351 10:05:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:45.351 10:05:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:45.351 10:05:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:45.351 10:05:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:45.351 10:05:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:45.610 10:05:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:13:45.610 10:05:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:45.610 10:05:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:45.610 10:05:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:45.610 10:05:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:45.610 10:05:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:45.610 10:05:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:45.610 10:05:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:45.610 10:05:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:45.610 10:05:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:45.610 10:05:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:45.610 10:05:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:13:45.610 10:05:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:45.886 malloc_lvol_verify 00:13:45.886 10:05:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:46.169 da7d3ad0-622b-4148-a0f3-926730da339b 00:13:46.169 10:05:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:46.427 48902879-ce21-4e58-a6f3-ba2cf9e26ff3 00:13:46.427 10:05:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:46.686 /dev/nbd0 00:13:46.686 10:05:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:13:46.686 10:05:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:13:46.686 10:05:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:13:46.686 10:05:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:13:46.686 10:05:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:13:46.686 mke2fs 1.47.0 (5-Feb-2023) 00:13:46.686 Discarding device blocks: 0/4096 done 00:13:46.686 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:46.686 00:13:46.686 Allocating group tables: 0/1 done 00:13:46.686 Writing inode tables: 0/1 done 00:13:46.686 Creating journal (1024 blocks): done 00:13:46.686 Writing superblocks and filesystem accounting information: 0/1 done 00:13:46.686 00:13:46.686 10:05:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:46.686 10:05:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:46.686 10:05:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:46.686 10:05:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:46.686 10:05:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:46.686 10:05:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:13:46.686 10:05:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:47.253 10:05:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:47.253 10:05:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:47.253 10:05:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:47.253 10:05:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:47.253 10:05:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:47.253 10:05:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:47.253 10:05:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:47.253 10:05:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:47.253 10:05:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 63139 00:13:47.253 10:05:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 63139 ']' 00:13:47.253 10:05:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 63139 00:13:47.253 10:05:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:13:47.253 10:05:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:47.253 10:05:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63139 00:13:47.253 killing process with pid 63139 00:13:47.253 10:05:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:47.253 10:05:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:47.253 10:05:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63139' 00:13:47.253 10:05:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 63139 00:13:47.253 10:05:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 63139 00:13:48.629 ************************************ 00:13:48.629 END TEST bdev_nbd 00:13:48.629 ************************************ 00:13:48.629 10:05:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:48.629 00:13:48.629 real 0m15.765s 00:13:48.629 user 0m22.340s 00:13:48.629 sys 0m5.151s 00:13:48.629 10:05:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:48.629 10:05:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:48.629 skipping fio tests on NVMe due to multi-ns failures. 00:13:48.629 10:05:19 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:13:48.629 10:05:19 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:13:48.629 10:05:19 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:13:48.629 10:05:19 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:13:48.629 10:05:19 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:48.629 10:05:19 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:48.629 10:05:19 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:13:48.629 10:05:19 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:48.629 10:05:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:48.629 ************************************ 00:13:48.629 START TEST bdev_verify 00:13:48.629 ************************************ 00:13:48.629 10:05:19 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:48.629 [2024-12-09 10:05:19.237745] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:13:48.629 [2024-12-09 10:05:19.237954] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63599 ] 00:13:48.887 [2024-12-09 10:05:19.426299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:48.887 [2024-12-09 10:05:19.577310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.887 [2024-12-09 10:05:19.577328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:49.821 Running I/O for 5 seconds... 
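(The verify stage now starting is the standalone bdevperf example rather than a shell loop; the full command line appears in the run_test trace above. Reproducing it by hand looks like the sketch below. Paths are as in the log; the flag glosses are the commonly documented meanings, while -C and the trailing empty argument are simply copied verbatim from the trace.)

    # -q 128    : keep 128 I/Os outstanding per job
    # -o 4096   : 4 KiB I/O units
    # -w verify : verification workload (data written is read back and checked)
    # -t 5      : run for five seconds
    # -m 0x3    : core mask, i.e. reactors on cores 0 and 1, matching the log
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
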
00:13:52.132 14400.00 IOPS, 56.25 MiB/s [2024-12-09T10:05:23.890Z] 14752.00 IOPS, 57.62 MiB/s [2024-12-09T10:05:24.825Z] 15040.00 IOPS, 58.75 MiB/s [2024-12-09T10:05:25.773Z] 15152.00 IOPS, 59.19 MiB/s [2024-12-09T10:05:25.773Z] 15155.20 IOPS, 59.20 MiB/s 00:13:54.976 Latency(us) 00:13:54.976 [2024-12-09T10:05:25.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:54.976 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:54.976 Verification LBA range: start 0x0 length 0xbd0bd 00:13:54.976 Nvme0n1 : 5.07 1085.13 4.24 0.00 0.00 117613.20 26452.71 94848.47 00:13:54.976 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:54.976 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:54.976 Nvme0n1 : 5.07 1034.78 4.04 0.00 0.00 123087.82 30027.40 113436.86 00:13:54.976 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:54.976 Verification LBA range: start 0x0 length 0x4ff80 00:13:54.976 Nvme1n1p1 : 5.08 1083.99 4.23 0.00 0.00 117523.81 28478.37 90558.84 00:13:54.976 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:54.976 Verification LBA range: start 0x4ff80 length 0x4ff80 00:13:54.976 Nvme1n1p1 : 5.10 1040.82 4.07 0.00 0.00 122025.54 15252.01 116773.24 00:13:54.976 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:54.976 Verification LBA range: start 0x0 length 0x4ff7f 00:13:54.976 Nvme1n1p2 : 5.08 1083.32 4.23 0.00 0.00 117302.77 30742.34 87222.46 00:13:54.976 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:54.976 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:13:54.976 Nvme1n1p2 : 5.11 1040.40 4.06 0.00 0.00 121831.89 14656.23 119632.99 00:13:54.976 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:54.976 Verification LBA range: start 0x0 length 0x80000 00:13:54.976 Nvme2n1 : 5.08 1082.85 4.23 0.00 0.00 117121.35 33840.41 88652.33 00:13:54.976 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:54.976 Verification LBA range: start 0x80000 length 0x80000 00:13:54.976 Nvme2n1 : 5.11 1040.02 4.06 0.00 0.00 121638.32 13464.67 117249.86 00:13:54.976 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:54.976 Verification LBA range: start 0x0 length 0x80000 00:13:54.976 Nvme2n2 : 5.09 1082.39 4.23 0.00 0.00 116953.81 32648.84 92941.96 00:13:54.976 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:54.976 Verification LBA range: start 0x80000 length 0x80000 00:13:54.976 Nvme2n2 : 5.11 1039.63 4.06 0.00 0.00 121411.58 13107.20 115819.99 00:13:54.976 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:54.976 Verification LBA range: start 0x0 length 0x80000 00:13:54.977 Nvme2n3 : 5.09 1081.88 4.23 0.00 0.00 116796.93 28478.37 95325.09 00:13:54.977 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:54.977 Verification LBA range: start 0x80000 length 0x80000 00:13:54.977 Nvme2n3 : 5.11 1039.23 4.06 0.00 0.00 121231.99 13226.36 114390.11 00:13:54.977 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:54.977 Verification LBA range: start 0x0 length 0x20000 00:13:54.977 Nvme3n1 : 5.10 1091.70 4.26 0.00 0.00 115676.73 6166.34 96754.97 00:13:54.977 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:54.977 Verification LBA range: start 0x20000 length 0x20000 
00:13:54.977 Nvme3n1 : 5.12 1049.46 4.10 0.00 0.00 120212.06 7804.74 113436.86 00:13:54.977 [2024-12-09T10:05:25.774Z] =================================================================================================================== 00:13:54.977 [2024-12-09T10:05:25.774Z] Total : 14875.60 58.11 0.00 0.00 119269.66 6166.34 119632.99 00:13:56.354 00:13:56.354 real 0m7.808s 00:13:56.354 user 0m14.174s 00:13:56.354 sys 0m0.375s 00:13:56.354 10:05:26 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:56.354 10:05:26 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:56.354 ************************************ 00:13:56.354 END TEST bdev_verify 00:13:56.354 ************************************ 00:13:56.354 10:05:26 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:56.354 10:05:26 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:13:56.354 10:05:26 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:56.354 10:05:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:56.354 ************************************ 00:13:56.354 START TEST bdev_verify_big_io 00:13:56.354 ************************************ 00:13:56.354 10:05:26 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:56.354 [2024-12-09 10:05:27.077058] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:13:56.354 [2024-12-09 10:05:27.077270] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63710 ] 00:13:56.613 [2024-12-09 10:05:27.255273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:56.613 [2024-12-09 10:05:27.402853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.613 [2024-12-09 10:05:27.402900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:57.550 Running I/O for 5 seconds... 
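(bdev_verify_big_io, now starting, is the same bdevperf verification pass with a single knob changed: -o grows from 4096 to 65536, so each I/O moves 64 KiB instead of 4 KiB. Side by side, with paths shortened here for readability:)

    bdevperf --json bdev.json -q 128 -o 4096  -w verify -t 5 -C -m 0x3   # bdev_verify
    bdevperf --json bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3   # bdev_verify_big_io
    # bdevperf's MiB/s column is IOPS * io_size / 2^20, so at 64 KiB units
    # it is IOPS / 16; e.g. the 1233.00 IOPS sample below prints as 77.06 MiB/s.
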
00:14:03.363 1233.00 IOPS, 77.06 MiB/s [2024-12-09T10:05:34.419Z] 2902.00 IOPS, 181.38 MiB/s [2024-12-09T10:05:34.419Z] 3348.67 IOPS, 209.29 MiB/s 00:14:03.622 Latency(us) 00:14:03.622 [2024-12-09T10:05:34.419Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:03.622 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:03.622 Verification LBA range: start 0x0 length 0xbd0b 00:14:03.622 Nvme0n1 : 5.67 123.97 7.75 0.00 0.00 995011.74 38368.35 983754.94 00:14:03.622 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:03.622 Verification LBA range: start 0xbd0b length 0xbd0b 00:14:03.622 Nvme0n1 : 5.72 112.11 7.01 0.00 0.00 1087306.65 25141.99 1021884.97 00:14:03.622 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:03.622 Verification LBA range: start 0x0 length 0x4ff8 00:14:03.622 Nvme1n1p1 : 5.63 124.73 7.80 0.00 0.00 973621.22 100091.35 945624.90 00:14:03.622 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:03.623 Verification LBA range: start 0x4ff8 length 0x4ff8 00:14:03.623 Nvme1n1p1 : 5.77 116.55 7.28 0.00 0.00 1027670.44 95325.09 861738.82 00:14:03.623 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:03.623 Verification LBA range: start 0x0 length 0x4ff7 00:14:03.623 Nvme1n1p2 : 5.71 129.33 8.08 0.00 0.00 926557.95 41704.73 957063.91 00:14:03.623 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:03.623 Verification LBA range: start 0x4ff7 length 0x4ff7 00:14:03.623 Nvme1n1p2 : 5.77 119.80 7.49 0.00 0.00 982831.00 45041.11 991380.95 00:14:03.623 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:03.623 Verification LBA range: start 0x0 length 0x8000 00:14:03.623 Nvme2n1 : 5.72 128.78 8.05 0.00 0.00 909978.21 41704.73 1174405.12 00:14:03.623 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:03.623 Verification LBA range: start 0x8000 length 0x8000 00:14:03.623 Nvme2n1 : 5.80 118.87 7.43 0.00 0.00 958063.34 45994.36 1342177.28 00:14:03.623 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:03.623 Verification LBA range: start 0x0 length 0x8000 00:14:03.623 Nvme2n2 : 5.73 134.48 8.40 0.00 0.00 862506.68 39559.91 1189657.13 00:14:03.623 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:03.623 Verification LBA range: start 0x8000 length 0x8000 00:14:03.623 Nvme2n2 : 5.83 120.85 7.55 0.00 0.00 923091.82 24903.68 1677721.60 00:14:03.623 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:03.623 Verification LBA range: start 0x0 length 0x8000 00:14:03.623 Nvme2n3 : 5.72 134.09 8.38 0.00 0.00 846981.30 38130.04 1197283.14 00:14:03.623 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:03.623 Verification LBA range: start 0x8000 length 0x8000 00:14:03.623 Nvme2n3 : 5.87 132.64 8.29 0.00 0.00 827779.72 17635.14 1998013.91 00:14:03.623 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:03.623 Verification LBA range: start 0x0 length 0x2000 00:14:03.623 Nvme3n1 : 5.73 144.92 9.06 0.00 0.00 770182.30 4885.41 1067641.02 00:14:03.623 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:03.623 Verification LBA range: start 0x2000 length 0x2000 00:14:03.623 Nvme3n1 : 5.89 148.73 9.30 0.00 0.00 723832.04 2576.76 1784485.70 00:14:03.623 
[2024-12-09T10:05:34.420Z] =================================================================================================================== 00:14:03.623 [2024-12-09T10:05:34.420Z] Total : 1789.86 111.87 0.00 0.00 907464.61 2576.76 1998013.91 00:14:05.561 00:14:05.561 real 0m9.293s 00:14:05.561 user 0m17.065s 00:14:05.561 sys 0m0.427s 00:14:05.561 10:05:36 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:05.561 10:05:36 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.561 ************************************ 00:14:05.561 END TEST bdev_verify_big_io 00:14:05.561 ************************************ 00:14:05.561 10:05:36 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:05.561 10:05:36 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:14:05.561 10:05:36 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:05.561 10:05:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:05.561 ************************************ 00:14:05.561 START TEST bdev_write_zeroes 00:14:05.561 ************************************ 00:14:05.561 10:05:36 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:05.819 [2024-12-09 10:05:36.432758] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:14:05.819 [2024-12-09 10:05:36.432975] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63825 ] 00:14:06.078 [2024-12-09 10:05:36.623737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.078 [2024-12-09 10:05:36.790474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.011 Running I/O for 1 seconds... 
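(For bdev_write_zeroes the harness swaps the workload and shortens the run: -w write_zeroes exercises the bdevs' zero-write path instead of data writes, -t 1 limits it to one second, and the EAL parameters above show a single core (-c 0x1), hence one reactor. At the 4 KiB I/O size the MiB/s figure below is again IOPS / 256, which is easy to sanity-check:)

    # Invocation as traced, paths shortened:
    bdevperf --json bdev.json -q 128 -o 4096 -w write_zeroes -t 1
    # Unit check against the headline rate reported below:
    awk 'BEGIN { print 52864 * 4096 / 1048576 }'   # prints 206.5 (MiB/s)
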
00:14:07.945 52864.00 IOPS, 206.50 MiB/s 00:14:07.945 Latency(us) 00:14:07.945 [2024-12-09T10:05:38.742Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:07.945 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:07.945 Nvme0n1 : 1.03 7526.31 29.40 0.00 0.00 16962.31 14417.92 27405.96 00:14:07.945 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:07.945 Nvme1n1p1 : 1.03 7516.38 29.36 0.00 0.00 16954.45 14120.03 26929.34 00:14:07.945 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:07.945 Nvme1n1p2 : 1.03 7506.42 29.32 0.00 0.00 16925.59 14179.61 26214.40 00:14:07.945 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:07.945 Nvme2n1 : 1.03 7497.45 29.29 0.00 0.00 16867.30 10724.07 25261.15 00:14:07.946 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:07.946 Nvme2n2 : 1.03 7488.59 29.25 0.00 0.00 16848.97 9830.40 24665.37 00:14:07.946 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:07.946 Nvme2n3 : 1.04 7479.54 29.22 0.00 0.00 16818.73 8698.41 25856.93 00:14:07.946 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:07.946 Nvme3n1 : 1.04 7470.58 29.18 0.00 0.00 16809.01 7804.74 27525.12 00:14:07.946 [2024-12-09T10:05:38.743Z] =================================================================================================================== 00:14:07.946 [2024-12-09T10:05:38.743Z] Total : 52485.28 205.02 0.00 0.00 16883.77 7804.74 27525.12 00:14:09.322 00:14:09.322 real 0m3.525s 00:14:09.322 user 0m3.062s 00:14:09.322 sys 0m0.338s 00:14:09.322 10:05:39 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:09.322 ************************************ 00:14:09.322 END TEST bdev_write_zeroes 00:14:09.322 ************************************ 00:14:09.322 10:05:39 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:14:09.322 10:05:39 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:09.322 10:05:39 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:14:09.322 10:05:39 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:09.322 10:05:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:09.322 ************************************ 00:14:09.322 START TEST bdev_json_nonenclosed 00:14:09.322 ************************************ 00:14:09.322 10:05:39 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:09.322 [2024-12-09 10:05:40.001804] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
00:14:09.322 [2024-12-09 10:05:40.002053] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63883 ] 00:14:09.580 [2024-12-09 10:05:40.191431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.580 [2024-12-09 10:05:40.362372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.580 [2024-12-09 10:05:40.362548] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:14:09.580 [2024-12-09 10:05:40.362594] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:09.580 [2024-12-09 10:05:40.362611] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:10.147 00:14:10.147 real 0m0.889s 00:14:10.147 user 0m0.618s 00:14:10.147 sys 0m0.160s 00:14:10.147 10:05:40 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:10.147 10:05:40 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:14:10.147 ************************************ 00:14:10.147 END TEST bdev_json_nonenclosed 00:14:10.147 ************************************ 00:14:10.147 10:05:40 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:10.147 10:05:40 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:14:10.147 10:05:40 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:10.147 10:05:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:10.147 ************************************ 00:14:10.147 START TEST bdev_json_nonarray 00:14:10.147 ************************************ 00:14:10.147 10:05:40 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:10.147 [2024-12-09 10:05:40.924422] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:14:10.147 [2024-12-09 10:05:40.924584] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63909 ] 00:14:10.405 [2024-12-09 10:05:41.099387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.663 [2024-12-09 10:05:41.243114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.663 [2024-12-09 10:05:41.243277] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
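(The *ERROR* line above, like the rpc shutdown lines that follow, is the expected outcome: bdev_json_nonenclosed and bdev_json_nonarray are negative tests that feed bdevperf deliberately malformed configs and pass only if the app rejects them cleanly. Reconstructed from the error text rather than copied from the repo, the two broken shapes are approximately:)

    # A valid config is a JSON object whose "subsystems" key holds an array:
    #   { "subsystems": [ ... ] }
    # nonenclosed.json drops the enclosing braces ("not enclosed in {}");
    # nonarray.json keeps the braces but makes "subsystems" a non-array
    # ("'subsystems' should be an array"), e.g.:
    printf '%s\n' '{ "subsystems": {} }' > /tmp/nonarray.json
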
00:14:10.663 [2024-12-09 10:05:41.243310] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:10.663 [2024-12-09 10:05:41.243326] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:10.922 00:14:10.922 real 0m0.792s 00:14:10.922 user 0m0.536s 00:14:10.922 sys 0m0.152s 00:14:10.922 10:05:41 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:10.922 10:05:41 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:14:10.922 ************************************ 00:14:10.922 END TEST bdev_json_nonarray 00:14:10.922 ************************************ 00:14:10.922 10:05:41 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:14:10.922 10:05:41 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:14:10.922 10:05:41 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:14:10.922 10:05:41 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:10.922 10:05:41 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:10.922 10:05:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:10.922 ************************************ 00:14:10.922 START TEST bdev_gpt_uuid 00:14:10.922 ************************************ 00:14:10.922 10:05:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:14:10.922 10:05:41 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:14:10.922 10:05:41 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:14:10.922 10:05:41 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63940 00:14:10.922 10:05:41 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:10.922 10:05:41 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63940 00:14:10.922 10:05:41 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:14:10.922 10:05:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 63940 ']' 00:14:10.922 10:05:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.922 10:05:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:10.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.922 10:05:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.922 10:05:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:10.922 10:05:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:14:11.181 [2024-12-09 10:05:41.809688] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
00:14:11.181 [2024-12-09 10:05:41.809940] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63940 ] 00:14:11.440 [2024-12-09 10:05:41.998307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.440 [2024-12-09 10:05:42.146493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.375 10:05:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:12.375 10:05:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:14:12.375 10:05:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:12.375 10:05:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.375 10:05:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:14:12.633 Some configs were skipped because the RPC state that can call them passed over. 00:14:12.633 10:05:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.634 10:05:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:14:12.634 10:05:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.634 10:05:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:14:12.634 10:05:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.892 10:05:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:14:12.892 10:05:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.892 10:05:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:14:12.892 10:05:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.892 10:05:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:14:12.892 { 00:14:12.892 "name": "Nvme1n1p1", 00:14:12.892 "aliases": [ 00:14:12.892 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:14:12.892 ], 00:14:12.892 "product_name": "GPT Disk", 00:14:12.892 "block_size": 4096, 00:14:12.892 "num_blocks": 655104, 00:14:12.892 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:14:12.892 "assigned_rate_limits": { 00:14:12.892 "rw_ios_per_sec": 0, 00:14:12.892 "rw_mbytes_per_sec": 0, 00:14:12.892 "r_mbytes_per_sec": 0, 00:14:12.892 "w_mbytes_per_sec": 0 00:14:12.892 }, 00:14:12.892 "claimed": false, 00:14:12.892 "zoned": false, 00:14:12.892 "supported_io_types": { 00:14:12.892 "read": true, 00:14:12.892 "write": true, 00:14:12.892 "unmap": true, 00:14:12.892 "flush": true, 00:14:12.892 "reset": true, 00:14:12.892 "nvme_admin": false, 00:14:12.892 "nvme_io": false, 00:14:12.892 "nvme_io_md": false, 00:14:12.892 "write_zeroes": true, 00:14:12.892 "zcopy": false, 00:14:12.892 "get_zone_info": false, 00:14:12.892 "zone_management": false, 00:14:12.892 "zone_append": false, 00:14:12.892 "compare": true, 00:14:12.892 "compare_and_write": false, 00:14:12.892 "abort": true, 00:14:12.892 "seek_hole": false, 00:14:12.892 "seek_data": false, 00:14:12.892 "copy": true, 00:14:12.892 "nvme_iov_md": false 00:14:12.892 }, 00:14:12.892 "driver_specific": { 
00:14:12.892 "gpt": { 00:14:12.892 "base_bdev": "Nvme1n1", 00:14:12.892 "offset_blocks": 256, 00:14:12.892 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:14:12.892 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:14:12.892 "partition_name": "SPDK_TEST_first" 00:14:12.892 } 00:14:12.892 } 00:14:12.892 } 00:14:12.892 ]' 00:14:12.892 10:05:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:14:12.892 10:05:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:14:12.892 10:05:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:14:12.892 10:05:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:14:12.892 10:05:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:14:12.892 10:05:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:14:12.892 10:05:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:14:12.892 10:05:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.892 10:05:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:14:12.892 10:05:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.892 10:05:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:14:12.892 { 00:14:12.892 "name": "Nvme1n1p2", 00:14:12.892 "aliases": [ 00:14:12.892 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:14:12.892 ], 00:14:12.892 "product_name": "GPT Disk", 00:14:12.892 "block_size": 4096, 00:14:12.892 "num_blocks": 655103, 00:14:12.892 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:14:12.892 "assigned_rate_limits": { 00:14:12.892 "rw_ios_per_sec": 0, 00:14:12.892 "rw_mbytes_per_sec": 0, 00:14:12.892 "r_mbytes_per_sec": 0, 00:14:12.892 "w_mbytes_per_sec": 0 00:14:12.892 }, 00:14:12.892 "claimed": false, 00:14:12.892 "zoned": false, 00:14:12.892 "supported_io_types": { 00:14:12.892 "read": true, 00:14:12.892 "write": true, 00:14:12.892 "unmap": true, 00:14:12.892 "flush": true, 00:14:12.892 "reset": true, 00:14:12.892 "nvme_admin": false, 00:14:12.892 "nvme_io": false, 00:14:12.892 "nvme_io_md": false, 00:14:12.892 "write_zeroes": true, 00:14:12.892 "zcopy": false, 00:14:12.892 "get_zone_info": false, 00:14:12.892 "zone_management": false, 00:14:12.892 "zone_append": false, 00:14:12.892 "compare": true, 00:14:12.892 "compare_and_write": false, 00:14:12.892 "abort": true, 00:14:12.892 "seek_hole": false, 00:14:12.892 "seek_data": false, 00:14:12.892 "copy": true, 00:14:12.892 "nvme_iov_md": false 00:14:12.892 }, 00:14:12.892 "driver_specific": { 00:14:12.892 "gpt": { 00:14:12.892 "base_bdev": "Nvme1n1", 00:14:12.892 "offset_blocks": 655360, 00:14:12.892 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:14:12.892 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:14:12.892 "partition_name": "SPDK_TEST_second" 00:14:12.892 } 00:14:12.892 } 00:14:12.892 } 00:14:12.892 ]' 00:14:12.892 10:05:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:14:12.892 10:05:43 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:14:12.892 10:05:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:14:13.151 10:05:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:14:13.151 10:05:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:14:13.151 10:05:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:14:13.151 10:05:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 63940 00:14:13.151 10:05:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 63940 ']' 00:14:13.151 10:05:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 63940 00:14:13.151 10:05:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:14:13.151 10:05:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:13.151 10:05:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63940 00:14:13.151 killing process with pid 63940 00:14:13.151 10:05:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:13.151 10:05:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:13.151 10:05:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63940' 00:14:13.151 10:05:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 63940 00:14:13.151 10:05:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 63940 00:14:15.683 ************************************ 00:14:15.683 END TEST bdev_gpt_uuid 00:14:15.683 ************************************ 00:14:15.683 00:14:15.683 real 0m4.494s 00:14:15.683 user 0m4.648s 00:14:15.683 sys 0m0.687s 00:14:15.683 10:05:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:15.683 10:05:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:14:15.683 10:05:46 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:14:15.683 10:05:46 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:14:15.683 10:05:46 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:14:15.683 10:05:46 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:14:15.683 10:05:46 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:15.683 10:05:46 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:14:15.683 10:05:46 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:14:15.683 10:05:46 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:14:15.683 10:05:46 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:15.942 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:16.201 Waiting for block devices as requested 00:14:16.201 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:16.201 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:14:16.201 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:16.469 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:21.768 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:21.768 10:05:52 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:14:21.768 10:05:52 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:14:21.768 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:14:21.768 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:14:21.768 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:14:21.768 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:14:21.768 10:05:52 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:14:21.768 00:14:21.768 real 1m9.130s 00:14:21.768 user 1m28.009s 00:14:21.768 sys 0m11.588s 00:14:21.768 10:05:52 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:21.768 10:05:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:21.768 ************************************ 00:14:21.768 END TEST blockdev_nvme_gpt 00:14:21.768 ************************************ 00:14:21.768 10:05:52 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:14:21.768 10:05:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:21.768 10:05:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:21.768 10:05:52 -- common/autotest_common.sh@10 -- # set +x 00:14:21.768 ************************************ 00:14:21.768 START TEST nvme 00:14:21.768 ************************************ 00:14:21.768 10:05:52 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:14:21.768 * Looking for test storage... 00:14:21.768 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:14:21.768 10:05:52 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:21.768 10:05:52 nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:14:21.768 10:05:52 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:22.026 10:05:52 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:22.027 10:05:52 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:22.027 10:05:52 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:22.027 10:05:52 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:22.027 10:05:52 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:14:22.027 10:05:52 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:14:22.027 10:05:52 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:14:22.027 10:05:52 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:14:22.027 10:05:52 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:14:22.027 10:05:52 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:14:22.027 10:05:52 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:14:22.027 10:05:52 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:22.027 10:05:52 nvme -- scripts/common.sh@344 -- # case "$op" in 00:14:22.027 10:05:52 nvme -- scripts/common.sh@345 -- # : 1 00:14:22.027 10:05:52 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:22.027 10:05:52 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:22.027 10:05:52 nvme -- scripts/common.sh@365 -- # decimal 1 00:14:22.027 10:05:52 nvme -- scripts/common.sh@353 -- # local d=1 00:14:22.027 10:05:52 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:22.027 10:05:52 nvme -- scripts/common.sh@355 -- # echo 1 00:14:22.027 10:05:52 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:14:22.027 10:05:52 nvme -- scripts/common.sh@366 -- # decimal 2 00:14:22.027 10:05:52 nvme -- scripts/common.sh@353 -- # local d=2 00:14:22.027 10:05:52 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:22.027 10:05:52 nvme -- scripts/common.sh@355 -- # echo 2 00:14:22.027 10:05:52 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:14:22.027 10:05:52 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:22.027 10:05:52 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:22.027 10:05:52 nvme -- scripts/common.sh@368 -- # return 0 00:14:22.027 10:05:52 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:22.027 10:05:52 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:22.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.027 --rc genhtml_branch_coverage=1 00:14:22.027 --rc genhtml_function_coverage=1 00:14:22.027 --rc genhtml_legend=1 00:14:22.027 --rc geninfo_all_blocks=1 00:14:22.027 --rc geninfo_unexecuted_blocks=1 00:14:22.027 00:14:22.027 ' 00:14:22.027 10:05:52 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:22.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.027 --rc genhtml_branch_coverage=1 00:14:22.027 --rc genhtml_function_coverage=1 00:14:22.027 --rc genhtml_legend=1 00:14:22.027 --rc geninfo_all_blocks=1 00:14:22.027 --rc geninfo_unexecuted_blocks=1 00:14:22.027 00:14:22.027 ' 00:14:22.027 10:05:52 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:22.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.027 --rc genhtml_branch_coverage=1 00:14:22.027 --rc genhtml_function_coverage=1 00:14:22.027 --rc genhtml_legend=1 00:14:22.027 --rc geninfo_all_blocks=1 00:14:22.027 --rc geninfo_unexecuted_blocks=1 00:14:22.027 00:14:22.027 ' 00:14:22.027 10:05:52 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:22.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.027 --rc genhtml_branch_coverage=1 00:14:22.027 --rc genhtml_function_coverage=1 00:14:22.027 --rc genhtml_legend=1 00:14:22.027 --rc geninfo_all_blocks=1 00:14:22.027 --rc geninfo_unexecuted_blocks=1 00:14:22.027 00:14:22.027 ' 00:14:22.027 10:05:52 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:22.593 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:23.158 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:23.158 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:23.158 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:23.158 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:23.158 10:05:53 nvme -- nvme/nvme.sh@79 -- # uname 00:14:23.158 10:05:53 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:14:23.158 10:05:53 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:14:23.158 10:05:53 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:14:23.159 10:05:53 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:14:23.159 10:05:53 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:14:23.159 10:05:53 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:14:23.159 Waiting for stub to ready for secondary processes... 00:14:23.159 10:05:53 nvme -- common/autotest_common.sh@1075 -- # stubpid=64599 00:14:23.159 10:05:53 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:14:23.159 10:05:53 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:14:23.159 10:05:53 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:14:23.159 10:05:53 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64599 ]] 00:14:23.159 10:05:53 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:14:23.417 [2024-12-09 10:05:53.985442] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:14:23.417 [2024-12-09 10:05:53.985662] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:14:24.351 10:05:54 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:14:24.351 10:05:54 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64599 ]] 00:14:24.351 10:05:54 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:14:24.609 [2024-12-09 10:05:55.403890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:24.867 [2024-12-09 10:05:55.529005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:24.867 [2024-12-09 10:05:55.529133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:24.867 [2024-12-09 10:05:55.529152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:24.867 [2024-12-09 10:05:55.549427] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:14:24.867 [2024-12-09 10:05:55.549483] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:14:24.867 [2024-12-09 10:05:55.560812] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:14:24.867 [2024-12-09 10:05:55.561124] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:14:24.867 [2024-12-09 10:05:55.563410] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:14:24.867 [2024-12-09 10:05:55.563663] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:14:24.867 [2024-12-09 10:05:55.563766] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:14:24.867 [2024-12-09 10:05:55.568448] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:14:24.867 [2024-12-09 10:05:55.568962] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:14:24.867 [2024-12-09 10:05:55.569148] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:14:24.867 [2024-12-09 10:05:55.573119] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:14:24.867 [2024-12-09 10:05:55.573533] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:14:24.868 [2024-12-09 10:05:55.573689] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:14:24.868 [2024-12-09 10:05:55.573802] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:14:24.868 [2024-12-09 10:05:55.573910] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:14:25.433 done. 00:14:25.433 10:05:55 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:14:25.434 10:05:55 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:14:25.434 10:05:55 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:14:25.434 10:05:55 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:14:25.434 10:05:55 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:25.434 10:05:55 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:25.434 ************************************ 00:14:25.434 START TEST nvme_reset 00:14:25.434 ************************************ 00:14:25.434 10:05:55 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:14:25.692 Initializing NVMe Controllers 00:14:25.692 Skipping QEMU NVMe SSD at 0000:00:10.0 00:14:25.692 Skipping QEMU NVMe SSD at 0000:00:11.0 00:14:25.692 Skipping QEMU NVMe SSD at 0000:00:13.0 00:14:25.692 Skipping QEMU NVMe SSD at 0000:00:12.0 00:14:25.692 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:14:25.692 00:14:25.692 real 0m0.415s 00:14:25.692 user 0m0.199s 00:14:25.692 sys 0m0.162s 00:14:25.692 10:05:56 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:25.692 10:05:56 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:14:25.692 ************************************ 00:14:25.692 END TEST nvme_reset 00:14:25.692 ************************************ 00:14:25.692 10:05:56 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:14:25.692 10:05:56 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:25.692 10:05:56 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:25.692 10:05:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:25.692 ************************************ 00:14:25.692 START TEST nvme_identify 00:14:25.692 ************************************ 00:14:25.692 10:05:56 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:14:25.692 10:05:56 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:14:25.692 10:05:56 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:14:25.692 10:05:56 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:14:25.692 10:05:56 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:14:25.692 10:05:56 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:14:25.692 10:05:56 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:14:25.692 10:05:56 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:25.692 10:05:56 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:25.692 10:05:56 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:14:25.692 10:05:56 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:14:25.692 10:05:56 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:14:25.692 10:05:56 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:14:26.261 [2024-12-09 10:05:56.792297] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64629 terminated unexpected 00:14:26.261 ===================================================== 00:14:26.261 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:26.261 ===================================================== 00:14:26.261 Controller Capabilities/Features 00:14:26.261 ================================ 00:14:26.261 Vendor ID: 1b36 00:14:26.261 Subsystem Vendor ID: 1af4 00:14:26.261 Serial Number: 12340 00:14:26.261 Model Number: QEMU NVMe Ctrl 00:14:26.261 Firmware Version: 8.0.0 00:14:26.261 Recommended Arb Burst: 6 00:14:26.261 IEEE OUI Identifier: 00 54 52 00:14:26.261 Multi-path I/O 00:14:26.261 May have multiple subsystem ports: No 00:14:26.261 May have multiple controllers: No 00:14:26.261 Associated with SR-IOV VF: No 00:14:26.261 Max Data Transfer Size: 524288 00:14:26.261 Max Number of Namespaces: 256 00:14:26.261 Max Number of I/O Queues: 64 00:14:26.261 NVMe Specification Version (VS): 1.4 00:14:26.261 NVMe Specification Version (Identify): 1.4 00:14:26.261 Maximum Queue Entries: 2048 00:14:26.261 Contiguous Queues Required: Yes 00:14:26.261 Arbitration Mechanisms Supported 00:14:26.261 Weighted Round Robin: Not Supported 00:14:26.261 Vendor Specific: Not Supported 00:14:26.261 Reset Timeout: 7500 ms 00:14:26.261 Doorbell Stride: 4 bytes 00:14:26.261 NVM Subsystem Reset: Not Supported 00:14:26.261 Command Sets Supported 00:14:26.261 NVM Command Set: Supported 00:14:26.261 Boot Partition: Not Supported 00:14:26.261 Memory Page Size Minimum: 4096 bytes 00:14:26.261 Memory Page Size Maximum: 65536 bytes 00:14:26.261 Persistent Memory Region: Not Supported 00:14:26.261 Optional Asynchronous Events Supported 00:14:26.261 Namespace Attribute Notices: Supported 00:14:26.261 Firmware Activation Notices: Not Supported 00:14:26.261 ANA Change Notices: Not Supported 00:14:26.261 PLE Aggregate Log Change Notices: Not Supported 00:14:26.261 LBA Status Info Alert Notices: Not Supported 00:14:26.261 EGE Aggregate Log Change Notices: Not Supported 00:14:26.261 Normal NVM Subsystem Shutdown event: Not Supported 00:14:26.261 Zone Descriptor Change Notices: Not Supported 00:14:26.261 Discovery Log Change Notices: Not Supported 00:14:26.261 Controller Attributes 00:14:26.261 128-bit Host Identifier: Not Supported 00:14:26.261 Non-Operational Permissive Mode: Not Supported 00:14:26.261 NVM Sets: Not Supported 00:14:26.261 Read Recovery Levels: Not Supported 00:14:26.261 Endurance Groups: Not Supported 00:14:26.261 Predictable Latency Mode: Not Supported 00:14:26.261 Traffic Based Keep ALive: Not Supported 00:14:26.261 Namespace Granularity: Not Supported 00:14:26.261 SQ Associations: Not Supported 00:14:26.261 UUID List: Not Supported 00:14:26.261 Multi-Domain Subsystem: Not Supported 00:14:26.261 Fixed Capacity Management: Not Supported 00:14:26.261 Variable Capacity Management: Not Supported 00:14:26.261 Delete Endurance Group: Not Supported 00:14:26.261 Delete NVM Set: Not Supported 00:14:26.261 Extended LBA Formats Supported: Supported 00:14:26.261 Flexible Data Placement Supported: Not Supported 00:14:26.261 00:14:26.261 Controller Memory Buffer Support 00:14:26.261 ================================ 00:14:26.261 Supported: No 
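The identify pass above gathers controller addresses by piping scripts/gen_nvme.sh through jq -r '.config[].params.traddr' and then dumps every attached controller in one go with spdk_nvme_identify -i 0. A minimal standalone sketch of the same flow, assuming the SPDK checkout path used in this run; the per-controller -r 'trtype:PCIe traddr:...' invocation is an assumption based on the tool's usual transport-ID option (the captured run attached through shared memory with -i 0 instead), and the grep only pulls out a few of the fields visible in the dump above:

  #!/usr/bin/env bash
  # Enumerate local NVMe controller BDFs the same way the test harness does.
  rootdir=/home/vagrant/spdk_repo/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
  for bdf in "${bdfs[@]}"; do
    # Dump identify data for one controller and keep a few headline fields.
    sudo "$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" \
      | grep -E 'Serial Number|Model Number|Subsystem NQN|Current Temperature'
  done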
00:14:26.261 00:14:26.261 Persistent Memory Region Support 00:14:26.261 ================================ 00:14:26.261 Supported: No 00:14:26.261 00:14:26.261 Admin Command Set Attributes 00:14:26.261 ============================ 00:14:26.261 Security Send/Receive: Not Supported 00:14:26.261 Format NVM: Supported 00:14:26.261 Firmware Activate/Download: Not Supported 00:14:26.261 Namespace Management: Supported 00:14:26.261 Device Self-Test: Not Supported 00:14:26.261 Directives: Supported 00:14:26.261 NVMe-MI: Not Supported 00:14:26.261 Virtualization Management: Not Supported 00:14:26.261 Doorbell Buffer Config: Supported 00:14:26.261 Get LBA Status Capability: Not Supported 00:14:26.261 Command & Feature Lockdown Capability: Not Supported 00:14:26.261 Abort Command Limit: 4 00:14:26.261 Async Event Request Limit: 4 00:14:26.261 Number of Firmware Slots: N/A 00:14:26.261 Firmware Slot 1 Read-Only: N/A 00:14:26.261 Firmware Activation Without Reset: N/A 00:14:26.261 Multiple Update Detection Support: N/A 00:14:26.261 Firmware Update Granularity: No Information Provided 00:14:26.261 Per-Namespace SMART Log: Yes 00:14:26.261 Asymmetric Namespace Access Log Page: Not Supported 00:14:26.261 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:14:26.261 Command Effects Log Page: Supported 00:14:26.261 Get Log Page Extended Data: Supported 00:14:26.261 Telemetry Log Pages: Not Supported 00:14:26.261 Persistent Event Log Pages: Not Supported 00:14:26.261 Supported Log Pages Log Page: May Support 00:14:26.261 Commands Supported & Effects Log Page: Not Supported 00:14:26.261 Feature Identifiers & Effects Log Page:May Support 00:14:26.261 NVMe-MI Commands & Effects Log Page: May Support 00:14:26.261 Data Area 4 for Telemetry Log: Not Supported 00:14:26.261 Error Log Page Entries Supported: 1 00:14:26.261 Keep Alive: Not Supported 00:14:26.261 00:14:26.261 NVM Command Set Attributes 00:14:26.261 ========================== 00:14:26.261 Submission Queue Entry Size 00:14:26.261 Max: 64 00:14:26.261 Min: 64 00:14:26.261 Completion Queue Entry Size 00:14:26.261 Max: 16 00:14:26.261 Min: 16 00:14:26.261 Number of Namespaces: 256 00:14:26.261 Compare Command: Supported 00:14:26.261 Write Uncorrectable Command: Not Supported 00:14:26.261 Dataset Management Command: Supported 00:14:26.261 Write Zeroes Command: Supported 00:14:26.261 Set Features Save Field: Supported 00:14:26.261 Reservations: Not Supported 00:14:26.261 Timestamp: Supported 00:14:26.261 Copy: Supported 00:14:26.261 Volatile Write Cache: Present 00:14:26.261 Atomic Write Unit (Normal): 1 00:14:26.261 Atomic Write Unit (PFail): 1 00:14:26.261 Atomic Compare & Write Unit: 1 00:14:26.261 Fused Compare & Write: Not Supported 00:14:26.261 Scatter-Gather List 00:14:26.261 SGL Command Set: Supported 00:14:26.261 SGL Keyed: Not Supported 00:14:26.261 SGL Bit Bucket Descriptor: Not Supported 00:14:26.261 SGL Metadata Pointer: Not Supported 00:14:26.261 Oversized SGL: Not Supported 00:14:26.261 SGL Metadata Address: Not Supported 00:14:26.261 SGL Offset: Not Supported 00:14:26.261 Transport SGL Data Block: Not Supported 00:14:26.261 Replay Protected Memory Block: Not Supported 00:14:26.261 00:14:26.261 Firmware Slot Information 00:14:26.261 ========================= 00:14:26.261 Active slot: 1 00:14:26.261 Slot 1 Firmware Revision: 1.0 00:14:26.261 00:14:26.261 00:14:26.261 Commands Supported and Effects 00:14:26.261 ============================== 00:14:26.261 Admin Commands 00:14:26.261 -------------- 00:14:26.261 Delete I/O Submission Queue (00h): Supported 
00:14:26.261 Create I/O Submission Queue (01h): Supported 00:14:26.261 Get Log Page (02h): Supported 00:14:26.261 Delete I/O Completion Queue (04h): Supported 00:14:26.261 Create I/O Completion Queue (05h): Supported 00:14:26.261 Identify (06h): Supported 00:14:26.261 Abort (08h): Supported 00:14:26.261 Set Features (09h): Supported 00:14:26.261 Get Features (0Ah): Supported 00:14:26.261 Asynchronous Event Request (0Ch): Supported 00:14:26.261 Namespace Attachment (15h): Supported NS-Inventory-Change 00:14:26.261 Directive Send (19h): Supported 00:14:26.261 Directive Receive (1Ah): Supported 00:14:26.261 Virtualization Management (1Ch): Supported 00:14:26.261 Doorbell Buffer Config (7Ch): Supported 00:14:26.261 Format NVM (80h): Supported LBA-Change 00:14:26.261 I/O Commands 00:14:26.261 ------------ 00:14:26.262 Flush (00h): Supported LBA-Change 00:14:26.262 Write (01h): Supported LBA-Change 00:14:26.262 Read (02h): Supported 00:14:26.262 Compare (05h): Supported 00:14:26.262 Write Zeroes (08h): Supported LBA-Change 00:14:26.262 Dataset Management (09h): Supported LBA-Change 00:14:26.262 Unknown (0Ch): Supported 00:14:26.262 Unknown (12h): Supported 00:14:26.262 Copy (19h): Supported LBA-Change 00:14:26.262 Unknown (1Dh): Supported LBA-Change 00:14:26.262 00:14:26.262 Error Log 00:14:26.262 ========= 00:14:26.262 00:14:26.262 Arbitration 00:14:26.262 =========== 00:14:26.262 Arbitration Burst: no limit 00:14:26.262 00:14:26.262 Power Management 00:14:26.262 ================ 00:14:26.262 Number of Power States: 1 00:14:26.262 Current Power State: Power State #0 00:14:26.262 Power State #0: 00:14:26.262 Max Power: 25.00 W 00:14:26.262 Non-Operational State: Operational 00:14:26.262 Entry Latency: 16 microseconds 00:14:26.262 Exit Latency: 4 microseconds 00:14:26.262 Relative Read Throughput: 0 00:14:26.262 Relative Read Latency: 0 00:14:26.262 Relative Write Throughput: 0 00:14:26.262 Relative Write Latency: 0 00:14:26.262 Idle Power: Not Reported 00:14:26.262 Active Power: Not Reported 00:14:26.262 Non-Operational Permissive Mode: Not Supported 00:14:26.262 00:14:26.262 Health Information 00:14:26.262 ================== 00:14:26.262 Critical Warnings: 00:14:26.262 Available Spare Space: OK 00:14:26.262 Temperature: OK 00:14:26.262 Device Reliability: OK 00:14:26.262 Read Only: No 00:14:26.262 Volatile Memory Backup: OK 00:14:26.262 Current Temperature: 323 Kelvin (50 Celsius) 00:14:26.262 Temperature Threshold: 343 Kelvin (70 Celsius) 00:14:26.262 Available Spare: 0% 00:14:26.262 Available Spare Threshold: 0% 00:14:26.262 Life Percentage Used: 0% 00:14:26.262 Data Units Read: 646 00:14:26.262 Data Units Written: 574 00:14:26.262 Host Read Commands: 28464 00:14:26.262 Host Write Commands: 28250 00:14:26.262 Controller Busy Time: 0 minutes 00:14:26.262 Power Cycles: 0 00:14:26.262 Power On Hours: 0 hours 00:14:26.262 Unsafe Shutdowns: 0 00:14:26.262 Unrecoverable Media Errors: 0 00:14:26.262 Lifetime Error Log Entries: 0 00:14:26.262 Warning Temperature Time: 0 minutes 00:14:26.262 Critical Temperature Time: 0 minutes 00:14:26.262 00:14:26.262 Number of Queues 00:14:26.262 ================ 00:14:26.262 Number of I/O Submission Queues: 64 00:14:26.262 Number of I/O Completion Queues: 64 00:14:26.262 00:14:26.262 ZNS Specific Controller Data 00:14:26.262 ============================ 00:14:26.262 Zone Append Size Limit: 0 00:14:26.262 00:14:26.262 00:14:26.262 Active Namespaces 00:14:26.262 ================= 00:14:26.262 Namespace ID:1 00:14:26.262 Error Recovery Timeout: Unlimited 00:14:26.262 
Command Set Identifier: NVM (00h) 00:14:26.262 Deallocate: Supported 00:14:26.262 Deallocated/Unwritten Error: Supported 00:14:26.262 Deallocated Read Value: All 0x00 00:14:26.262 Deallocate in Write Zeroes: Not Supported 00:14:26.262 Deallocated Guard Field: 0xFFFF 00:14:26.262 Flush: Supported 00:14:26.262 Reservation: Not Supported 00:14:26.262 Metadata Transferred as: Separate Metadata Buffer 00:14:26.262 Namespace Sharing Capabilities: Private 00:14:26.262 Size (in LBAs): 1548666 (5GiB) 00:14:26.262 Capacity (in LBAs): 1548666 (5GiB) 00:14:26.262 Utilization (in LBAs): 1548666 (5GiB) 00:14:26.262 Thin Provisioning: Not Supported 00:14:26.262 Per-NS Atomic Units: No 00:14:26.262 Maximum Single Source Range Length: 128 00:14:26.262 Maximum Copy Length: 128 00:14:26.262 Maximum Source Range Count: 128 00:14:26.262 NGUID/EUI64 Never Reused: No 00:14:26.262 Namespace Write Protected: No 00:14:26.262 Number of LBA Formats: 8 00:14:26.262 Current LBA Format: LBA Format #07 00:14:26.262 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:26.262 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:26.262 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:26.262 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:26.262 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:26.262 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:26.262 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:26.262 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:26.262 00:14:26.262 NVM Specific Namespace Data 00:14:26.262 =========================== 00:14:26.262 Logical Block Storage Tag Mask: 0 00:14:26.262 Protection Information Capabilities: 00:14:26.262 16b Guard Protection Information Storage Tag Support: No 00:14:26.262 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:26.262 Storage Tag Check Read Support: No 00:14:26.262 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.262 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.262 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.262 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.262 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.262 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.262 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.262 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.262 ===================================================== 00:14:26.262 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:26.262 ===================================================== 00:14:26.262 Controller Capabilities/Features 00:14:26.262 ================================ 00:14:26.262 Vendor ID: 1b36 00:14:26.262 Subsystem Vendor ID: 1af4 00:14:26.262 Serial Number: 12341 00:14:26.262 Model Number: QEMU NVMe Ctrl 00:14:26.262 Firmware Version: 8.0.0 00:14:26.262 Recommended Arb Burst: 6 00:14:26.262 IEEE OUI Identifier: 00 54 52 00:14:26.262 Multi-path I/O 00:14:26.262 May have multiple subsystem ports: No 00:14:26.262 May have multiple controllers: No 00:14:26.262 Associated with SR-IOV VF: No 00:14:26.262 Max Data Transfer Size: 524288 00:14:26.262 Max Number of Namespaces: 256 00:14:26.262 Max Number of I/O Queues: 
64 00:14:26.262 NVMe Specification Version (VS): 1.4 00:14:26.262 NVMe Specification Version (Identify): 1.4 00:14:26.262 Maximum Queue Entries: 2048 00:14:26.262 Contiguous Queues Required: Yes 00:14:26.262 Arbitration Mechanisms Supported 00:14:26.262 Weighted Round Robin: Not Supported 00:14:26.262 Vendor Specific: Not Supported 00:14:26.262 Reset Timeout: 7500 ms 00:14:26.262 Doorbell Stride: 4 bytes 00:14:26.262 NVM Subsystem Reset: Not Supported 00:14:26.262 Command Sets Supported 00:14:26.262 NVM Command Set: Supported 00:14:26.262 Boot Partition: Not Supported 00:14:26.262 Memory Page Size Minimum: 4096 bytes 00:14:26.262 Memory Page Size Maximum: 65536 bytes 00:14:26.262 Persistent Memory Region: Not Supported 00:14:26.262 Optional Asynchronous Events Supported 00:14:26.262 Namespace Attribute Notices: Supported 00:14:26.262 Firmware Activation Notices: Not Supported 00:14:26.262 ANA Change Notices: Not Supported 00:14:26.262 PLE Aggregate Log Change Notices: Not Supported 00:14:26.262 LBA Status Info Alert Notices: Not Supported 00:14:26.262 EGE Aggregate Log Change Notices: Not Supported 00:14:26.262 Normal NVM Subsystem Shutdown event: Not Supported 00:14:26.262 Zone Descriptor Change Notices: Not Supported 00:14:26.262 Discovery Log Change Notices: Not Supported 00:14:26.262 Controller Attributes 00:14:26.262 128-bit Host Identifier: Not Supported 00:14:26.262 Non-Operational Permissive Mode: Not Supported 00:14:26.262 NVM Sets: Not Supported 00:14:26.262 Read Recovery Levels: Not Supported 00:14:26.262 Endurance Groups: Not Supported 00:14:26.262 Predictable Latency Mode: Not Supported 00:14:26.262 Traffic Based Keep ALive: Not Supported 00:14:26.262 Namespace Granularity: Not Supported 00:14:26.262 SQ Associations: Not Supported 00:14:26.262 UUID List: Not Supported 00:14:26.262 Multi-Domain Subsystem: Not Supported 00:14:26.262 Fixed Capacity Management: Not Supported 00:14:26.262 Variable Capacity Management: Not Supported 00:14:26.262 Delete Endurance Group: Not Supported 00:14:26.262 Delete NVM Set: Not Supported 00:14:26.262 Extended LBA Formats Supported: Supported 00:14:26.262 Flexible Data Placement Supported: Not Supported 00:14:26.262 00:14:26.262 Controller Memory Buffer Support 00:14:26.262 ================================ 00:14:26.262 Supported: No 00:14:26.262 00:14:26.262 Persistent Memory Region Support 00:14:26.262 ================================ 00:14:26.262 Supported: No 00:14:26.262 00:14:26.262 Admin Command Set Attributes 00:14:26.262 ============================ 00:14:26.262 Security Send/Receive: Not Supported 00:14:26.262 Format NVM: Supported 00:14:26.262 Firmware Activate/Download: Not Supported 00:14:26.262 Namespace Management: Supported 00:14:26.262 Device Self-Test: Not Supported 00:14:26.262 Directives: Supported 00:14:26.262 NVMe-MI: Not Supported 00:14:26.263 Virtualization Management: Not Supported 00:14:26.263 Doorbell Buffer Config: Supported 00:14:26.263 Get LBA Status Capability: Not Supported 00:14:26.263 Command & Feature Lockdown Capability: Not Supported 00:14:26.263 Abort Command Limit: 4 00:14:26.263 Async Event Request Limit: 4 00:14:26.263 Number of Firmware Slots: N/A 00:14:26.263 Firmware Slot 1 Read-Only: N/A 00:14:26.263 Firmware Activation Without Reset: N/A 00:14:26.263 Multiple Update Detection Support: N/A 00:14:26.263 Firmware Update Granularity: No Information Provided 00:14:26.263 Per-Namespace SMART Log: Yes 00:14:26.263 Asymmetric Namespace Access Log Page: Not Supported 00:14:26.263 Subsystem NQN: 
nqn.2019-08.org.qemu:12341 00:14:26.263 Command Effects Log Page: Supported 00:14:26.263 Get Log Page Extended Data: Supported 00:14:26.263 Telemetry Log Pages: Not Supported 00:14:26.263 Persistent Event Log Pages: Not Supported 00:14:26.263 Supported Log Pages Log Page: May Support 00:14:26.263 Commands Supported & Effects Log Page: Not Supported 00:14:26.263 Feature Identifiers & Effects Log Page:May Support 00:14:26.263 NVMe-MI Commands & Effects Log Page: May Support 00:14:26.263 Data Area 4 for Telemetry Log: Not Supported 00:14:26.263 Error Log Page Entries Supported: 1 00:14:26.263 Keep Alive: Not Supported 00:14:26.263 00:14:26.263 NVM Command Set Attributes 00:14:26.263 ========================== 00:14:26.263 Submission Queue Entry Size 00:14:26.263 Max: 64 00:14:26.263 Min: 64 00:14:26.263 Completion Queue Entry Size 00:14:26.263 Max: 16 00:14:26.263 Min: 16 00:14:26.263 Number of Namespaces: 256 00:14:26.263 Compare Command: Supported 00:14:26.263 Write Uncorrectable Command: Not Supported 00:14:26.263 Dataset Management Command: Supported 00:14:26.263 Write Zeroes Command: Supported 00:14:26.263 Set Features Save Field: Supported 00:14:26.263 Reservations: Not Supported 00:14:26.263 Timestamp: Supported 00:14:26.263 Copy: Supported 00:14:26.263 Volatile Write Cache: Present 00:14:26.263 Atomic Write Unit (Normal): 1 00:14:26.263 Atomic Write Unit (PFail): 1 00:14:26.263 Atomic Compare & Write Unit: 1 00:14:26.263 Fused Compare & Write: Not Supported 00:14:26.263 Scatter-Gather List 00:14:26.263 SGL Command Set: Supported 00:14:26.263 SGL Keyed: Not Supported 00:14:26.263 SGL Bit Bucket Descriptor: Not Supported 00:14:26.263 SGL Metadata Pointer: Not Supported 00:14:26.263 Oversized SGL: Not Supported 00:14:26.263 SGL Metadata Address: Not Supported 00:14:26.263 SGL Offset: Not Supported 00:14:26.263 Transport SGL Data Block: Not Supported 00:14:26.263 Replay Protected Memory Block: Not Supported 00:14:26.263 00:14:26.263 Firmware Slot Information 00:14:26.263 ========================= 00:14:26.263 Active slot: 1 00:14:26.263 Slot 1 Firmware Revision: 1.0 00:14:26.263 00:14:26.263 00:14:26.263 Commands Supported and Effects 00:14:26.263 ============================== 00:14:26.263 Admin Commands 00:14:26.263 -------------- 00:14:26.263 Delete I/O Submission Queue (00h): Supported 00:14:26.263 Create I/O Submission Queue (01h): Supported 00:14:26.263 Get Log Page (02h): Supported 00:14:26.263 Delete I/O Completion Queue (04h): Supported 00:14:26.263 Create I/O Completion Queue (05h): Supported 00:14:26.263 Identify (06h): Supported 00:14:26.263 Abort (08h): Supported 00:14:26.263 Set Features (09h): Supported 00:14:26.263 Get Features (0Ah): Supported 00:14:26.263 Asynchronous Event Request (0Ch): Supported 00:14:26.263 Namespace Attachment (15h): Supported NS-Inventory-Change 00:14:26.263 Directive Send (19h): Supported 00:14:26.263 Directive Receive (1Ah): Supported 00:14:26.263 Virtualization Management (1Ch): Supported 00:14:26.263 Doorbell Buffer Config (7Ch): Supported 00:14:26.263 Format NVM (80h): Supported LBA-Change 00:14:26.263 I/O Commands 00:14:26.263 ------------ 00:14:26.263 Flush (00h): Supported LBA-Change 00:14:26.263 Write (01h): Supported LBA-Change 00:14:26.263 Read (02h): Supported 00:14:26.263 Compare (05h): Supported 00:14:26.263 Write Zeroes (08h): Supported LBA-Change 00:14:26.263 Dataset Management (09h): Supported LBA-Change 00:14:26.263 Unknown (0Ch): Supported 00:14:26.263 Unknown (12h): Supported 00:14:26.263 Copy (19h): Supported LBA-Change 
00:14:26.263 Unknown (1Dh): Supported LBA-Change 00:14:26.263 00:14:26.263 Error Log 00:14:26.263 ========= 00:14:26.263 00:14:26.263 Arbitration 00:14:26.263 =========== 00:14:26.263 Arbitration Burst: no limit 00:14:26.263 00:14:26.263 Power Management 00:14:26.263 ================ 00:14:26.263 Number of Power States: 1 00:14:26.263 Current Power State: Power State #0 00:14:26.263 Power State #0: 00:14:26.263 Max Power: 25.00 W 00:14:26.263 Non-Operational State: Operational 00:14:26.263 Entry Latency: 16 microseconds 00:14:26.263 Exit Latency: 4 microseconds 00:14:26.263 Relative Read Throughput: 0 00:14:26.263 Relative Read Latency: 0 00:14:26.263 Relative Write Throughput: 0 00:14:26.263 Relative Write Latency: 0 00:14:26.263 Idle Power: Not Reported 00:14:26.263 Active Power: Not Reported 00:14:26.263 Non-Operational Permissive Mode: Not Supported 00:14:26.263 00:14:26.263 Health Information 00:14:26.263 ================== 00:14:26.263 Critical Warnings: 00:14:26.263 Available Spare Space: OK 00:14:26.263 Temperature: OK 00:14:26.263 Device Reliability: OK 00:14:26.263 Read Only: No 00:14:26.263 Volatile Memory Backup: OK 00:14:26.263 Current Temperature: 323 Kelvin (50 Celsius) 00:14:26.263 Temperature Threshold: 343 Kelvin (70 Celsius) 00:14:26.263 Available Spare: 0% 00:14:26.263 Available Spare Threshold: 0% 00:14:26.263 Life Percentage Used: 0% 00:14:26.263 Data Units Read: 990 00:14:26.263 Data Units Written: 850 00:14:26.263 Host Read Commands: 41918 00:14:26.263 Host Write Commands: 40611 00:14:26.263 Controller Busy Time: 0 minutes 00:14:26.263 Power Cycles: 0 00:14:26.263 Power On Hours: 0 hours 00:14:26.263 Unsafe Shutdowns: 0 00:14:26.263 Unrecoverable Media Errors: 0 00:14:26.263 Lifetime Error Log Entries: 0 00:14:26.263 Warning Temperature Time: 0 minutes 00:14:26.263 Critical Temperature Time: 0 minutes 00:14:26.263 00:14:26.263 Number of Queues 00:14:26.263 ================ 00:14:26.263 Number of I/O Submission Queues: 64 00:14:26.263 Number of I/O Completion Queues: 64 00:14:26.263 00:14:26.263 ZNS Specific Controller Data 00:14:26.263 ============================ 00:14:26.263 Zone Append Size Limit: 0 00:14:26.263 00:14:26.263 00:14:26.263 Active Namespaces 00:14:26.263 ================= 00:14:26.263 Namespace ID:1 00:14:26.263 Error Recovery Timeout: Unlimited 00:14:26.263 Command Set Identifier: NVM (00h) 00:14:26.263 Deallocate: Supported 00:14:26.263 Deallocated/Unwritten Error: Supported 00:14:26.263 Deallocated Read Value: All 0x00 00:14:26.263 Deallocate in Write Zeroes: Not Supported 00:14:26.263 Deallocated Guard Field: 0xFFFF 00:14:26.263 Flush: Supported 00:14:26.263 Reservation: Not Supported 00:14:26.263 Namespace Sharing Capabilities: Private 00:14:26.263 Size (in LBAs): 1310720 (5GiB) 00:14:26.263 Capacity (in LBAs): 1310720 (5GiB) 00:14:26.263 Utilization (in LBAs): 1310720 (5GiB) 00:14:26.263 Thin Provisioning: Not Supported 00:14:26.263 Per-NS Atomic Units: No 00:14:26.263 Maximum Single Source Range Length: 128 00:14:26.263 Maximum Copy Length: 128 00:14:26.263 Maximum Source Range Count: 128 00:14:26.263 NGUID/EUI64 Never Reused: No 00:14:26.263 Namespace Write Protected: No 00:14:26.263 Number of LBA Formats: 8 00:14:26.263 Current LBA Format: LBA Format #04 00:14:26.263 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:26.263 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:26.263 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:26.263 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:26.263 LBA Format #04: Data Size: 4096 
Metadata Size: 0 00:14:26.263 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:26.263 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:26.263 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:26.263 00:14:26.263 NVM Specific Namespace Data 00:14:26.263 =========================== 00:14:26.263 Logical Block Storage Tag Mask: 0 00:14:26.263 Protection Information Capabilities: 00:14:26.263 16b Guard Protection Information Storage Tag Support: No 00:14:26.263 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:26.263 Storage Tag Check Read Support: No 00:14:26.263 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.263 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.263 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.263 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.263 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.263 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.263 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.263 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.263 ===================================================== 00:14:26.264 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:26.264 ===================================================== 00:14:26.264 Controller Capabilities/Features 00:14:26.264 ================================ 00:14:26.264 Vendor ID: 1b36 00:14:26.264 Subsystem Vendor ID: 1af4 00:14:26.264 Serial Number: 12343 00:14:26.264 Model Number: QEMU NVMe Ctrl 00:14:26.264 Firmware Version: 8.0.0 00:14:26.264 Recommended Arb Burst: 6 00:14:26.264 IEEE OUI Identifier: 00 54 52 00:14:26.264 Multi-path I/O 00:14:26.264 May have multiple subsystem ports: No 00:14:26.264 May have multiple controllers: Yes 00:14:26.264 Associated with SR-IOV VF: No 00:14:26.264 Max Data Transfer Size: 524288 00:14:26.264 Max Number of Namespaces: 256 00:14:26.264 Max Number of I/O Queues: 64 00:14:26.264 NVMe Specification Version (VS): 1.4 00:14:26.264 NVMe Specification Version (Identify): 1.4 00:14:26.264 Maximum Queue Entries: 2048 00:14:26.264 Contiguous Queues Required: Yes 00:14:26.264 Arbitration Mechanisms Supported 00:14:26.264 Weighted Round Robin: Not Supported 00:14:26.264 Vendor Specific: Not Supported 00:14:26.264 Reset Timeout: 7500 ms 00:14:26.264 Doorbell Stride: 4 bytes 00:14:26.264 NVM Subsystem Reset: Not Supported 00:14:26.264 Command Sets Supported 00:14:26.264 NVM Command Set: Supported 00:14:26.264 Boot Partition: Not Supported 00:14:26.264 Memory Page Size Minimum: 4096 bytes 00:14:26.264 Memory Page Size Maximum: 65536 bytes 00:14:26.264 Persistent Memory Region: Not Supported 00:14:26.264 Optional Asynchronous Events Supported 00:14:26.264 Namespace Attribute Notices: Supported 00:14:26.264 Firmware Activation Notices: Not Supported 00:14:26.264 ANA Change Notices: Not Supported 00:14:26.264 PLE Aggregate Log Change Notices: Not Supported 00:14:26.264 LBA Status Info Alert Notices: Not Supported 00:14:26.264 EGE Aggregate Log Change Notices: Not Supported 00:14:26.264 Normal NVM Subsystem Shutdown event: Not Supported 00:14:26.264 Zone Descriptor Change Notices: Not Supported 00:14:26.264 Discovery Log Change 
Notices: Not Supported 00:14:26.264 Controller Attributes 00:14:26.264 128-bit Host Identifier: Not Supported 00:14:26.264 Non-Operational Permissive Mode: Not Supported 00:14:26.264 NVM Sets: Not Supported 00:14:26.264 Read Recovery Levels: Not Supported 00:14:26.264 Endurance Groups: Supported 00:14:26.264 Predictable Latency Mode: Not Supported 00:14:26.264 Traffic Based Keep ALive: Not Supported 00:14:26.264 Namespace Granularity: Not Supported 00:14:26.264 SQ Associations: Not Supported 00:14:26.264 UUID List: Not Supported 00:14:26.264 Multi-Domain Subsystem: Not Supported 00:14:26.264 Fixed Capacity Management: Not Supported 00:14:26.264 Variable Capacity Management: Not Supported 00:14:26.264 Delete Endurance Group: Not Supported 00:14:26.264 Delete NVM Set: Not Supported 00:14:26.264 Extended LBA Formats Supported: Supported 00:14:26.264 Flexible Data Placement Supported: Supported 00:14:26.264 00:14:26.264 Controller Memory Buffer Support 00:14:26.264 ================================ 00:14:26.264 Supported: No 00:14:26.264 00:14:26.264 Persistent Memory Region Support 00:14:26.264 ================================ 00:14:26.264 Supported: No 00:14:26.264 00:14:26.264 Admin Command Set Attributes 00:14:26.264 ============================ 00:14:26.264 Security Send/Receive: Not Supported 00:14:26.264 Format NVM: Supported 00:14:26.264 Firmware Activate/Download: Not Supported 00:14:26.264 Namespace Management: Supported 00:14:26.264 Device Self-Test: Not Supported 00:14:26.264 Directives: Supported 00:14:26.264 NVMe-MI: Not Supported 00:14:26.264 Virtualization Management: Not Supported 00:14:26.264 Doorbell Buffer Config: Supported 00:14:26.264 Get LBA Status Capability: Not Supported 00:14:26.264 Command & Feature Lockdown Capability: Not Supported 00:14:26.264 Abort Command Limit: 4 00:14:26.264 Async Event Request Limit: 4 00:14:26.264 Number of Firmware Slots: N/A 00:14:26.264 Firmware Slot 1 Read-Only: N/A 00:14:26.264 Firmware Activation Without Reset: N/A 00:14:26.264 Multiple Update Detection Support: N/A 00:14:26.264 Firmware Update Granularity: No Information Provided 00:14:26.264 Per-Namespace SMART Log: Yes 00:14:26.264 Asymmetric Namespace Access Log Page: Not Supported 00:14:26.264 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:14:26.264 Command Effects Log Page: Supported 00:14:26.264 Get Log Page Extended Data: Supported 00:14:26.264 Telemetry Log Pages: Not Supported 00:14:26.264 Persistent Event Log Pages: Not Supported 00:14:26.264 Supported Log Pages Log Page: May Support 00:14:26.264 Commands Supported & Effects Log Page: Not Supported 00:14:26.264 Feature Identifiers & Effects Log Page:May Support 00:14:26.264 NVMe-MI Commands & Effects Log Page: May Support 00:14:26.264 Data Area 4 for Telemetry Log: Not Supported 00:14:26.264 Error Log Page Entries Supported: 1 00:14:26.264 Keep Alive: Not Supported 00:14:26.264 00:14:26.264 NVM Command Set Attributes 00:14:26.264 ========================== 00:14:26.264 Submission Queue Entry Size 00:14:26.264 Max: 64 00:14:26.264 Min: 64 00:14:26.264 Completion Queue Entry Size 00:14:26.264 Max: 16 00:14:26.264 Min: 16 00:14:26.264 Number of Namespaces: 256 00:14:26.264 Compare Command: Supported 00:14:26.264 Write Uncorrectable Command: Not Supported 00:14:26.264 Dataset Management Command: Supported 00:14:26.264 Write Zeroes Command: Supported 00:14:26.264 Set Features Save Field: Supported 00:14:26.264 Reservations: Not Supported 00:14:26.264 Timestamp: Supported 00:14:26.264 Copy: Supported 00:14:26.264 Volatile 
Write Cache: Present 00:14:26.264 Atomic Write Unit (Normal): 1 00:14:26.264 Atomic Write Unit (PFail): 1 00:14:26.264 Atomic Compare & Write Unit: 1 00:14:26.264 Fused Compare & Write: Not Supported 00:14:26.264 Scatter-Gather List 00:14:26.264 SGL Command Set: Supported 00:14:26.264 SGL Keyed: Not Supported 00:14:26.264 SGL Bit Bucket Descriptor: Not Supported 00:14:26.264 SGL Metadata Pointer: Not Supported 00:14:26.264 Oversized SGL: Not Supported 00:14:26.264 SGL Metadata Address: Not Supported 00:14:26.264 SGL Offset: Not Supported 00:14:26.264 Transport SGL Data Block: Not Supported 00:14:26.264 Replay Protected Memory Block: Not Supported 00:14:26.264 00:14:26.264 Firmware Slot Information 00:14:26.264 ========================= 00:14:26.264 Active slot: 1 00:14:26.264 Slot 1 Firmware Revision: 1.0 00:14:26.264 00:14:26.264 00:14:26.264 Commands Supported and Effects 00:14:26.264 ============================== 00:14:26.264 Admin Commands 00:14:26.264 -------------- 00:14:26.264 Delete I/O Submission Queue (00h): Supported 00:14:26.264 Create I/O Submission Queue (01h): Supported 00:14:26.264 Get Log Page (02h): Supported 00:14:26.264 Delete I/O Completion Queue (04h): Supported 00:14:26.264 Create I/O Completion Queue (05h): Supported 00:14:26.264 Identify (06h): Supported 00:14:26.264 Abort (08h): Supported 00:14:26.264 Set Features (09h): Supported 00:14:26.264 Get Features (0Ah): Supported 00:14:26.264 Asynchronous Event Request (0Ch): Supported 00:14:26.264 Namespace Attachment (15h): Supported NS-Inventory-Change 00:14:26.264 Directive Send (19h): Supported 00:14:26.264 Directive Receive (1Ah): Supported 00:14:26.264 Virtualization Management (1Ch): Supported 00:14:26.264 Doorbell Buffer Config (7Ch): Supported 00:14:26.264 Format NVM (80h): Supported LBA-Change 00:14:26.264 I/O Commands 00:14:26.264 ------------ 00:14:26.264 Flush (00h): Supported LBA-Change 00:14:26.264 Write (01h): Supported LBA-Change 00:14:26.264 Read (02h): Supported 00:14:26.264 Compare (05h): Supported 00:14:26.264 Write Zeroes (08h): Supported LBA-Change 00:14:26.264 Dataset Management (09h): Supported LBA-Change 00:14:26.264 Unknown (0Ch): Supported 00:14:26.264 Unknown (12h): Supported 00:14:26.264 Copy (19h): Supported LBA-Change 00:14:26.264 Unknown (1Dh): Supported LBA-Change 00:14:26.264 00:14:26.264 Error Log 00:14:26.264 ========= 00:14:26.264 00:14:26.264 Arbitration 00:14:26.264 =========== 00:14:26.264 Arbitration Burst: no limit 00:14:26.264 00:14:26.264 Power Management 00:14:26.264 ================ 00:14:26.264 Number of Power States: 1 00:14:26.264 Current Power State: Power State #0 00:14:26.264 Power State #0: 00:14:26.264 Max Power: 25.00 W 00:14:26.264 Non-Operational State: Operational 00:14:26.264 Entry Latency: 16 microseconds 00:14:26.264 Exit Latency: 4 microseconds 00:14:26.264 Relative Read Throughput: 0 00:14:26.264 Relative Read Latency: 0 00:14:26.264 Relative Write Throughput: 0 00:14:26.264 Relative Write Latency: 0 00:14:26.264 Idle Power: Not Reported 00:14:26.264 Active Power: Not Reported 00:14:26.265 Non-Operational Permissive Mode: Not Supported 00:14:26.265 00:14:26.265 Health Information 00:14:26.265 ================== 00:14:26.265 Critical Warnings: 00:14:26.265 Available Spare Space: OK 00:14:26.265 Temperature: OK 00:14:26.265 Device Reliability: OK 00:14:26.265 Read Only: No 00:14:26.265 Volatile Memory Backup: OK 00:14:26.265 Current Temperature: 323 Kelvin (50 Celsius) 00:14:26.265 Temperature Threshold: 343 Kelvin (70 Celsius) 00:14:26.265 Available Spare: 
0% 00:14:26.265 Available Spare Threshold: 0% 00:14:26.265 Life Percentage Used: 0% 00:14:26.265 Data Units Read: 744 00:14:26.265 Data Units Written: 673 00:14:26.265 Host Read Commands: 29711 00:14:26.265 Host Write Commands: 29134 00:14:26.265 Controller Busy Time: 0 minutes 00:14:26.265 Power Cycles: 0 00:14:26.265 Power On Hours: 0 hours 00:14:26.265 Unsafe Shutdowns: 0 00:14:26.265 Unrecoverable Media Errors: 0 00:14:26.265 Lifetime Error Log Entries: 0 00:14:26.265 Warning Temperature Time: 0 minutes 00:14:26.265 Critical Temperature Time: 0 minutes 00:14:26.265 00:14:26.265 Number of Queues 00:14:26.265 ================ 00:14:26.265 Number of I/O Submission Queues: 64 00:14:26.265 Number of I/O Completion Queues: 64 00:14:26.265 00:14:26.265 ZNS Specific Controller Data 00:14:26.265 ============================ 00:14:26.265 Zone Append Size Limit: 0 00:14:26.265 00:14:26.265 00:14:26.265 Active Namespaces 00:14:26.265 ================= 00:14:26.265 Namespace ID:1 00:14:26.265 Error Recovery Timeout: Unlimited 00:14:26.265 Command Set Identifier: NVM (00h) 00:14:26.265 Deallocate: Supported 00:14:26.265 Deallocated/Unwritten Error: Supported 00:14:26.265 Deallocated Read Value: All 0x00 00:14:26.265 Deallocate in Write Zeroes: Not Supported 00:14:26.265 Deallocated Guard Field: 0xFFFF 00:14:26.265 Flush: Supported 00:14:26.265 Reservation: Not Supported 00:14:26.265 Namespace Sharing Capabilities: Multiple Controllers 00:14:26.265 Size (in LBAs): 262144 (1GiB) 00:14:26.265 Capacity (in LBAs): 262144 (1GiB) 00:14:26.265 Utilization (in LBAs): 262144 (1GiB) 00:14:26.265 Thin Provisioning: Not Supported 00:14:26.265 Per-NS Atomic Units: No 00:14:26.265 Maximum Single Source Range Length: 128 00:14:26.265 Maximum Copy Length: 128 00:14:26.265 Maximum Source Range Count: 128 00:14:26.265 NGUID/EUI64 Never Reused: No 00:14:26.265 Namespace Write Protected: No 00:14:26.265 Endurance group ID: 1 00:14:26.265 Number of LBA Formats: 8 00:14:26.265 Current LBA Format: LBA Format #04 00:14:26.265 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:26.265 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:26.265 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:26.265 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:26.265 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:26.265 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:26.265 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:26.265 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:26.265 00:14:26.265 Get Feature FDP: 00:14:26.265 ================ 00:14:26.265 Enabled: Yes 00:14:26.265 FDP configuration index: 0 00:14:26.265 00:14:26.265 FDP configurations log page 00:14:26.265 =========================== 00:14:26.265 Number of FDP configurations: 1 00:14:26.265 Version: 0 00:14:26.265 Size: 112 00:14:26.265 FDP Configuration Descriptor: 0 00:14:26.265 Descriptor Size: 96 00:14:26.265 Reclaim Group Identifier format: 2 00:14:26.265 FDP Volatile Write Cache: Not Present 00:14:26.265 FDP Configuration: Valid 00:14:26.265 Vendor Specific Size: 0 00:14:26.265 Number of Reclaim Groups: 2 00:14:26.265 Number of Reclaim Unit Handles: 8 00:14:26.265 Max Placement Identifiers: 128 00:14:26.265 Number of Namespaces Supported: 256 00:14:26.265 Reclaim Unit Nominal Size: 6000000 bytes 00:14:26.265 Estimated Reclaim Unit Time Limit: Not Reported 00:14:26.265 RUH Desc #000: RUH Type: Initially Isolated 00:14:26.265 RUH Desc #001: RUH Type: Initially Isolated 00:14:26.265 RUH Desc #002: RUH Type: Initially Isolated 
00:14:26.265 RUH Desc #003: RUH Type: Initially Isolated 00:14:26.265 RUH Desc #004: RUH Type: Initially Isolated 00:14:26.265 RUH Desc #005: RUH Type: Initially Isolated 00:14:26.265 RUH Desc #006: RUH Type: Initially Isolated 00:14:26.265 RUH Desc #007: RUH Type: Initially Isolated 00:14:26.265 00:14:26.265 FDP reclaim unit handle usage log page 00:14:26.265 ====================================== 00:14:26.265 Number of Reclaim Unit Handles: 8 00:14:26.265 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:14:26.265 RUH Usage Desc #001: RUH Attributes: Unused 00:14:26.265 RUH Usage Desc #002: RUH Attributes: Unused 00:14:26.265 RUH Usage Desc #003: RUH Attributes: Unused 00:14:26.265 RUH Usage Desc #004: RUH Attributes: Unused 00:14:26.265 RUH Usage Desc #005: RUH Attributes: Unused 00:14:26.265 RUH Usage Desc #006: RUH Attributes: Unused 00:14:26.265 RUH Usage Desc #007: RUH Attributes: Unused 00:14:26.265 00:14:26.265 FDP statistics log page 00:14:26.265 ======================= 00:14:26.265 Host bytes with metadata written: 425435136 00:14:26.265 Media bytes with metadata written: 425480192 00:14:26.265 Media bytes erased: 0 [2024-12-09 10:05:56.794173] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64629 terminated unexpected [2024-12-09 10:05:56.794847] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64629 terminated unexpected [2024-12-09 10:05:56.797196] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64629 terminated unexpected 00:14:26.265 00:14:26.265 FDP events log page 00:14:26.265 =================== 00:14:26.265 Number of FDP events: 0 00:14:26.265 00:14:26.265 NVM Specific Namespace Data 00:14:26.265 =========================== 00:14:26.265 Logical Block Storage Tag Mask: 0 00:14:26.265 Protection Information Capabilities: 00:14:26.265 16b Guard Protection Information Storage Tag Support: No 00:14:26.265 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:26.265 Storage Tag Check Read Support: No 00:14:26.265 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.265 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.265 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.265 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.265 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.265 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.265 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.265 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.265 ===================================================== 00:14:26.265 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:26.265 ===================================================== 00:14:26.265 Controller Capabilities/Features 00:14:26.265 ================================ 00:14:26.265 Vendor ID: 1b36 00:14:26.265 Subsystem Vendor ID: 1af4 00:14:26.265 Serial Number: 12342 00:14:26.265 Model Number: QEMU NVMe Ctrl 00:14:26.265 Firmware Version: 8.0.0 00:14:26.265 Recommended Arb Burst: 6 00:14:26.265 IEEE OUI Identifier: 00 54 52 00:14:26.265 Multi-path I/O 
00:14:26.265 May have multiple subsystem ports: No 00:14:26.265 May have multiple controllers: No 00:14:26.265 Associated with SR-IOV VF: No 00:14:26.265 Max Data Transfer Size: 524288 00:14:26.265 Max Number of Namespaces: 256 00:14:26.265 Max Number of I/O Queues: 64 00:14:26.265 NVMe Specification Version (VS): 1.4 00:14:26.265 NVMe Specification Version (Identify): 1.4 00:14:26.265 Maximum Queue Entries: 2048 00:14:26.265 Contiguous Queues Required: Yes 00:14:26.265 Arbitration Mechanisms Supported 00:14:26.265 Weighted Round Robin: Not Supported 00:14:26.265 Vendor Specific: Not Supported 00:14:26.265 Reset Timeout: 7500 ms 00:14:26.265 Doorbell Stride: 4 bytes 00:14:26.266 NVM Subsystem Reset: Not Supported 00:14:26.266 Command Sets Supported 00:14:26.266 NVM Command Set: Supported 00:14:26.266 Boot Partition: Not Supported 00:14:26.266 Memory Page Size Minimum: 4096 bytes 00:14:26.266 Memory Page Size Maximum: 65536 bytes 00:14:26.266 Persistent Memory Region: Not Supported 00:14:26.266 Optional Asynchronous Events Supported 00:14:26.266 Namespace Attribute Notices: Supported 00:14:26.266 Firmware Activation Notices: Not Supported 00:14:26.266 ANA Change Notices: Not Supported 00:14:26.266 PLE Aggregate Log Change Notices: Not Supported 00:14:26.266 LBA Status Info Alert Notices: Not Supported 00:14:26.266 EGE Aggregate Log Change Notices: Not Supported 00:14:26.266 Normal NVM Subsystem Shutdown event: Not Supported 00:14:26.266 Zone Descriptor Change Notices: Not Supported 00:14:26.266 Discovery Log Change Notices: Not Supported 00:14:26.266 Controller Attributes 00:14:26.266 128-bit Host Identifier: Not Supported 00:14:26.266 Non-Operational Permissive Mode: Not Supported 00:14:26.266 NVM Sets: Not Supported 00:14:26.266 Read Recovery Levels: Not Supported 00:14:26.266 Endurance Groups: Not Supported 00:14:26.266 Predictable Latency Mode: Not Supported 00:14:26.266 Traffic Based Keep Alive: Not Supported 00:14:26.266 Namespace Granularity: Not Supported 00:14:26.266 SQ Associations: Not Supported 00:14:26.266 UUID List: Not Supported 00:14:26.266 Multi-Domain Subsystem: Not Supported 00:14:26.266 Fixed Capacity Management: Not Supported 00:14:26.266 Variable Capacity Management: Not Supported 00:14:26.266 Delete Endurance Group: Not Supported 00:14:26.266 Delete NVM Set: Not Supported 00:14:26.266 Extended LBA Formats Supported: Supported 00:14:26.266 Flexible Data Placement Supported: Not Supported 00:14:26.266 00:14:26.266 Controller Memory Buffer Support 00:14:26.266 ================================ 00:14:26.266 Supported: No 00:14:26.266 00:14:26.266 Persistent Memory Region Support 00:14:26.266 ================================ 00:14:26.266 Supported: No 00:14:26.266 00:14:26.266 Admin Command Set Attributes 00:14:26.266 ============================ 00:14:26.266 Security Send/Receive: Not Supported 00:14:26.266 Format NVM: Supported 00:14:26.266 Firmware Activate/Download: Not Supported 00:14:26.266 Namespace Management: Supported 00:14:26.266 Device Self-Test: Not Supported 00:14:26.266 Directives: Supported 00:14:26.266 NVMe-MI: Not Supported 00:14:26.266 Virtualization Management: Not Supported 00:14:26.266 Doorbell Buffer Config: Supported 00:14:26.266 Get LBA Status Capability: Not Supported 00:14:26.266 Command & Feature Lockdown Capability: Not Supported 00:14:26.266 Abort Command Limit: 4 00:14:26.266 Async Event Request Limit: 4 00:14:26.266 Number of Firmware Slots: N/A 00:14:26.266 Firmware Slot 1 Read-Only: N/A 00:14:26.266 Firmware Activation Without Reset: N/A 
00:14:26.266 Multiple Update Detection Support: N/A 00:14:26.266 Firmware Update Granularity: No Information Provided 00:14:26.266 Per-Namespace SMART Log: Yes 00:14:26.266 Asymmetric Namespace Access Log Page: Not Supported 00:14:26.266 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:14:26.266 Command Effects Log Page: Supported 00:14:26.266 Get Log Page Extended Data: Supported 00:14:26.266 Telemetry Log Pages: Not Supported 00:14:26.266 Persistent Event Log Pages: Not Supported 00:14:26.266 Supported Log Pages Log Page: May Support 00:14:26.266 Commands Supported & Effects Log Page: Not Supported 00:14:26.266 Feature Identifiers & Effects Log Page: May Support 00:14:26.266 NVMe-MI Commands & Effects Log Page: May Support 00:14:26.266 Data Area 4 for Telemetry Log: Not Supported 00:14:26.266 Error Log Page Entries Supported: 1 00:14:26.266 Keep Alive: Not Supported 00:14:26.266 00:14:26.266 NVM Command Set Attributes 00:14:26.266 ========================== 00:14:26.266 Submission Queue Entry Size 00:14:26.266 Max: 64 00:14:26.266 Min: 64 00:14:26.266 Completion Queue Entry Size 00:14:26.266 Max: 16 00:14:26.266 Min: 16 00:14:26.266 Number of Namespaces: 256 00:14:26.266 Compare Command: Supported 00:14:26.266 Write Uncorrectable Command: Not Supported 00:14:26.266 Dataset Management Command: Supported 00:14:26.266 Write Zeroes Command: Supported 00:14:26.266 Set Features Save Field: Supported 00:14:26.266 Reservations: Not Supported 00:14:26.266 Timestamp: Supported 00:14:26.266 Copy: Supported 00:14:26.266 Volatile Write Cache: Present 00:14:26.266 Atomic Write Unit (Normal): 1 00:14:26.266 Atomic Write Unit (PFail): 1 00:14:26.266 Atomic Compare & Write Unit: 1 00:14:26.266 Fused Compare & Write: Not Supported 00:14:26.266 Scatter-Gather List 00:14:26.266 SGL Command Set: Supported 00:14:26.266 SGL Keyed: Not Supported 00:14:26.266 SGL Bit Bucket Descriptor: Not Supported 00:14:26.266 SGL Metadata Pointer: Not Supported 00:14:26.266 Oversized SGL: Not Supported 00:14:26.266 SGL Metadata Address: Not Supported 00:14:26.266 SGL Offset: Not Supported 00:14:26.266 Transport SGL Data Block: Not Supported 00:14:26.266 Replay Protected Memory Block: Not Supported 00:14:26.266 00:14:26.266 Firmware Slot Information 00:14:26.266 ========================= 00:14:26.266 Active slot: 1 00:14:26.266 Slot 1 Firmware Revision: 1.0 00:14:26.266 00:14:26.266 00:14:26.266 Commands Supported and Effects 00:14:26.266 ============================== 00:14:26.266 Admin Commands 00:14:26.266 -------------- 00:14:26.266 Delete I/O Submission Queue (00h): Supported 00:14:26.266 Create I/O Submission Queue (01h): Supported 00:14:26.266 Get Log Page (02h): Supported 00:14:26.266 Delete I/O Completion Queue (04h): Supported 00:14:26.266 Create I/O Completion Queue (05h): Supported 00:14:26.266 Identify (06h): Supported 00:14:26.266 Abort (08h): Supported 00:14:26.266 Set Features (09h): Supported 00:14:26.266 Get Features (0Ah): Supported 00:14:26.266 Asynchronous Event Request (0Ch): Supported 00:14:26.266 Namespace Attachment (15h): Supported NS-Inventory-Change 00:14:26.266 Directive Send (19h): Supported 00:14:26.266 Directive Receive (1Ah): Supported 00:14:26.266 Virtualization Management (1Ch): Supported 00:14:26.266 Doorbell Buffer Config (7Ch): Supported 00:14:26.266 Format NVM (80h): Supported LBA-Change 00:14:26.266 I/O Commands 00:14:26.266 ------------ 00:14:26.266 Flush (00h): Supported LBA-Change 00:14:26.266 Write (01h): Supported LBA-Change 00:14:26.266 Read (02h): Supported 00:14:26.266 Compare (05h): 
Supported 00:14:26.266 Write Zeroes (08h): Supported LBA-Change 00:14:26.266 Dataset Management (09h): Supported LBA-Change 00:14:26.266 Unknown (0Ch): Supported 00:14:26.266 Unknown (12h): Supported 00:14:26.266 Copy (19h): Supported LBA-Change 00:14:26.266 Unknown (1Dh): Supported LBA-Change 00:14:26.266 00:14:26.266 Error Log 00:14:26.266 ========= 00:14:26.266 00:14:26.266 Arbitration 00:14:26.266 =========== 00:14:26.266 Arbitration Burst: no limit 00:14:26.266 00:14:26.266 Power Management 00:14:26.266 ================ 00:14:26.266 Number of Power States: 1 00:14:26.266 Current Power State: Power State #0 00:14:26.266 Power State #0: 00:14:26.266 Max Power: 25.00 W 00:14:26.266 Non-Operational State: Operational 00:14:26.266 Entry Latency: 16 microseconds 00:14:26.266 Exit Latency: 4 microseconds 00:14:26.266 Relative Read Throughput: 0 00:14:26.266 Relative Read Latency: 0 00:14:26.266 Relative Write Throughput: 0 00:14:26.266 Relative Write Latency: 0 00:14:26.266 Idle Power: Not Reported 00:14:26.266 Active Power: Not Reported 00:14:26.266 Non-Operational Permissive Mode: Not Supported 00:14:26.266 00:14:26.266 Health Information 00:14:26.266 ================== 00:14:26.266 Critical Warnings: 00:14:26.266 Available Spare Space: OK 00:14:26.266 Temperature: OK 00:14:26.266 Device Reliability: OK 00:14:26.266 Read Only: No 00:14:26.266 Volatile Memory Backup: OK 00:14:26.266 Current Temperature: 323 Kelvin (50 Celsius) 00:14:26.266 Temperature Threshold: 343 Kelvin (70 Celsius) 00:14:26.266 Available Spare: 0% 00:14:26.266 Available Spare Threshold: 0% 00:14:26.266 Life Percentage Used: 0% 00:14:26.266 Data Units Read: 2023 00:14:26.266 Data Units Written: 1810 00:14:26.266 Host Read Commands: 86843 00:14:26.266 Host Write Commands: 85112 00:14:26.266 Controller Busy Time: 0 minutes 00:14:26.266 Power Cycles: 0 00:14:26.266 Power On Hours: 0 hours 00:14:26.266 Unsafe Shutdowns: 0 00:14:26.266 Unrecoverable Media Errors: 0 00:14:26.266 Lifetime Error Log Entries: 0 00:14:26.266 Warning Temperature Time: 0 minutes 00:14:26.266 Critical Temperature Time: 0 minutes 00:14:26.266 00:14:26.266 Number of Queues 00:14:26.266 ================ 00:14:26.266 Number of I/O Submission Queues: 64 00:14:26.267 Number of I/O Completion Queues: 64 00:14:26.267 00:14:26.267 ZNS Specific Controller Data 00:14:26.267 ============================ 00:14:26.267 Zone Append Size Limit: 0 00:14:26.267 00:14:26.267 00:14:26.267 Active Namespaces 00:14:26.267 ================= 00:14:26.267 Namespace ID:1 00:14:26.267 Error Recovery Timeout: Unlimited 00:14:26.267 Command Set Identifier: NVM (00h) 00:14:26.267 Deallocate: Supported 00:14:26.267 Deallocated/Unwritten Error: Supported 00:14:26.267 Deallocated Read Value: All 0x00 00:14:26.267 Deallocate in Write Zeroes: Not Supported 00:14:26.267 Deallocated Guard Field: 0xFFFF 00:14:26.267 Flush: Supported 00:14:26.267 Reservation: Not Supported 00:14:26.267 Namespace Sharing Capabilities: Private 00:14:26.267 Size (in LBAs): 1048576 (4GiB) 00:14:26.267 Capacity (in LBAs): 1048576 (4GiB) 00:14:26.267 Utilization (in LBAs): 1048576 (4GiB) 00:14:26.267 Thin Provisioning: Not Supported 00:14:26.267 Per-NS Atomic Units: No 00:14:26.267 Maximum Single Source Range Length: 128 00:14:26.267 Maximum Copy Length: 128 00:14:26.267 Maximum Source Range Count: 128 00:14:26.267 NGUID/EUI64 Never Reused: No 00:14:26.267 Namespace Write Protected: No 00:14:26.267 Number of LBA Formats: 8 00:14:26.267 Current LBA Format: LBA Format #04 00:14:26.267 LBA Format #00: Data Size: 512 
Metadata Size: 0 00:14:26.267 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:26.267 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:26.267 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:26.267 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:26.267 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:26.267 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:26.267 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:26.267 00:14:26.267 NVM Specific Namespace Data 00:14:26.267 =========================== 00:14:26.267 Logical Block Storage Tag Mask: 0 00:14:26.267 Protection Information Capabilities: 00:14:26.267 16b Guard Protection Information Storage Tag Support: No 00:14:26.267 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:26.267 Storage Tag Check Read Support: No 00:14:26.267 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.267 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.267 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.267 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.267 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.267 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.267 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.267 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.267 Namespace ID:2 00:14:26.267 Error Recovery Timeout: Unlimited 00:14:26.267 Command Set Identifier: NVM (00h) 00:14:26.267 Deallocate: Supported 00:14:26.267 Deallocated/Unwritten Error: Supported 00:14:26.267 Deallocated Read Value: All 0x00 00:14:26.267 Deallocate in Write Zeroes: Not Supported 00:14:26.267 Deallocated Guard Field: 0xFFFF 00:14:26.267 Flush: Supported 00:14:26.267 Reservation: Not Supported 00:14:26.267 Namespace Sharing Capabilities: Private 00:14:26.267 Size (in LBAs): 1048576 (4GiB) 00:14:26.267 Capacity (in LBAs): 1048576 (4GiB) 00:14:26.267 Utilization (in LBAs): 1048576 (4GiB) 00:14:26.267 Thin Provisioning: Not Supported 00:14:26.267 Per-NS Atomic Units: No 00:14:26.267 Maximum Single Source Range Length: 128 00:14:26.267 Maximum Copy Length: 128 00:14:26.267 Maximum Source Range Count: 128 00:14:26.267 NGUID/EUI64 Never Reused: No 00:14:26.267 Namespace Write Protected: No 00:14:26.267 Number of LBA Formats: 8 00:14:26.267 Current LBA Format: LBA Format #04 00:14:26.267 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:26.267 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:26.267 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:26.267 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:26.267 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:26.267 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:26.267 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:26.267 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:26.267 00:14:26.267 NVM Specific Namespace Data 00:14:26.267 =========================== 00:14:26.267 Logical Block Storage Tag Mask: 0 00:14:26.267 Protection Information Capabilities: 00:14:26.267 16b Guard Protection Information Storage Tag Support: No 00:14:26.267 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:14:26.267 Storage Tag Check Read Support: No 00:14:26.267 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.267 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.267 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.267 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.267 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.267 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.267 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.267 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.267 Namespace ID:3 00:14:26.267 Error Recovery Timeout: Unlimited 00:14:26.267 Command Set Identifier: NVM (00h) 00:14:26.267 Deallocate: Supported 00:14:26.267 Deallocated/Unwritten Error: Supported 00:14:26.267 Deallocated Read Value: All 0x00 00:14:26.267 Deallocate in Write Zeroes: Not Supported 00:14:26.267 Deallocated Guard Field: 0xFFFF 00:14:26.267 Flush: Supported 00:14:26.267 Reservation: Not Supported 00:14:26.267 Namespace Sharing Capabilities: Private 00:14:26.267 Size (in LBAs): 1048576 (4GiB) 00:14:26.267 Capacity (in LBAs): 1048576 (4GiB) 00:14:26.267 Utilization (in LBAs): 1048576 (4GiB) 00:14:26.267 Thin Provisioning: Not Supported 00:14:26.267 Per-NS Atomic Units: No 00:14:26.267 Maximum Single Source Range Length: 128 00:14:26.267 Maximum Copy Length: 128 00:14:26.267 Maximum Source Range Count: 128 00:14:26.267 NGUID/EUI64 Never Reused: No 00:14:26.267 Namespace Write Protected: No 00:14:26.267 Number of LBA Formats: 8 00:14:26.267 Current LBA Format: LBA Format #04 00:14:26.267 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:26.267 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:26.267 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:26.267 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:26.267 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:26.267 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:26.267 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:26.267 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:26.267 00:14:26.267 NVM Specific Namespace Data 00:14:26.267 =========================== 00:14:26.267 Logical Block Storage Tag Mask: 0 00:14:26.267 Protection Information Capabilities: 00:14:26.267 16b Guard Protection Information Storage Tag Support: No 00:14:26.267 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:26.267 Storage Tag Check Read Support: No 00:14:26.267 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.267 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.267 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.267 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.267 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.267 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.267 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.267 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.267 10:05:56 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:14:26.267 10:05:56 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:14:26.525 ===================================================== 00:14:26.526 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:26.526 ===================================================== 00:14:26.526 Controller Capabilities/Features 00:14:26.526 ================================ 00:14:26.526 Vendor ID: 1b36 00:14:26.526 Subsystem Vendor ID: 1af4 00:14:26.526 Serial Number: 12340 00:14:26.526 Model Number: QEMU NVMe Ctrl 00:14:26.526 Firmware Version: 8.0.0 00:14:26.526 Recommended Arb Burst: 6 00:14:26.526 IEEE OUI Identifier: 00 54 52 00:14:26.526 Multi-path I/O 00:14:26.526 May have multiple subsystem ports: No 00:14:26.526 May have multiple controllers: No 00:14:26.526 Associated with SR-IOV VF: No 00:14:26.526 Max Data Transfer Size: 524288 00:14:26.526 Max Number of Namespaces: 256 00:14:26.526 Max Number of I/O Queues: 64 00:14:26.526 NVMe Specification Version (VS): 1.4 00:14:26.526 NVMe Specification Version (Identify): 1.4 00:14:26.526 Maximum Queue Entries: 2048 00:14:26.526 Contiguous Queues Required: Yes 00:14:26.526 Arbitration Mechanisms Supported 00:14:26.526 Weighted Round Robin: Not Supported 00:14:26.526 Vendor Specific: Not Supported 00:14:26.526 Reset Timeout: 7500 ms 00:14:26.526 Doorbell Stride: 4 bytes 00:14:26.526 NVM Subsystem Reset: Not Supported 00:14:26.526 Command Sets Supported 00:14:26.526 NVM Command Set: Supported 00:14:26.526 Boot Partition: Not Supported 00:14:26.526 Memory Page Size Minimum: 4096 bytes 00:14:26.526 Memory Page Size Maximum: 65536 bytes 00:14:26.526 Persistent Memory Region: Not Supported 00:14:26.526 Optional Asynchronous Events Supported 00:14:26.526 Namespace Attribute Notices: Supported 00:14:26.526 Firmware Activation Notices: Not Supported 00:14:26.526 ANA Change Notices: Not Supported 00:14:26.526 PLE Aggregate Log Change Notices: Not Supported 00:14:26.526 LBA Status Info Alert Notices: Not Supported 00:14:26.526 EGE Aggregate Log Change Notices: Not Supported 00:14:26.526 Normal NVM Subsystem Shutdown event: Not Supported 00:14:26.526 Zone Descriptor Change Notices: Not Supported 00:14:26.526 Discovery Log Change Notices: Not Supported 00:14:26.526 Controller Attributes 00:14:26.526 128-bit Host Identifier: Not Supported 00:14:26.526 Non-Operational Permissive Mode: Not Supported 00:14:26.526 NVM Sets: Not Supported 00:14:26.526 Read Recovery Levels: Not Supported 00:14:26.526 Endurance Groups: Not Supported 00:14:26.526 Predictable Latency Mode: Not Supported 00:14:26.526 Traffic Based Keep Alive: Not Supported 00:14:26.526 Namespace Granularity: Not Supported 00:14:26.526 SQ Associations: Not Supported 00:14:26.526 UUID List: Not Supported 00:14:26.526 Multi-Domain Subsystem: Not Supported 00:14:26.526 Fixed Capacity Management: Not Supported 00:14:26.526 Variable Capacity Management: Not Supported 00:14:26.526 Delete Endurance Group: Not Supported 00:14:26.526 Delete NVM Set: Not Supported 00:14:26.526 Extended LBA Formats Supported: Supported 00:14:26.526 Flexible Data Placement Supported: Not Supported 00:14:26.526 00:14:26.526 Controller Memory Buffer Support 00:14:26.526 ================================ 00:14:26.526 Supported: No 00:14:26.526 00:14:26.526 Persistent Memory Region Support 00:14:26.526 
================================ 00:14:26.526 Supported: No 00:14:26.526 00:14:26.526 Admin Command Set Attributes 00:14:26.526 ============================ 00:14:26.526 Security Send/Receive: Not Supported 00:14:26.526 Format NVM: Supported 00:14:26.526 Firmware Activate/Download: Not Supported 00:14:26.526 Namespace Management: Supported 00:14:26.526 Device Self-Test: Not Supported 00:14:26.526 Directives: Supported 00:14:26.526 NVMe-MI: Not Supported 00:14:26.526 Virtualization Management: Not Supported 00:14:26.526 Doorbell Buffer Config: Supported 00:14:26.526 Get LBA Status Capability: Not Supported 00:14:26.526 Command & Feature Lockdown Capability: Not Supported 00:14:26.526 Abort Command Limit: 4 00:14:26.526 Async Event Request Limit: 4 00:14:26.526 Number of Firmware Slots: N/A 00:14:26.526 Firmware Slot 1 Read-Only: N/A 00:14:26.526 Firmware Activation Without Reset: N/A 00:14:26.526 Multiple Update Detection Support: N/A 00:14:26.526 Firmware Update Granularity: No Information Provided 00:14:26.526 Per-Namespace SMART Log: Yes 00:14:26.526 Asymmetric Namespace Access Log Page: Not Supported 00:14:26.526 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:14:26.526 Command Effects Log Page: Supported 00:14:26.526 Get Log Page Extended Data: Supported 00:14:26.526 Telemetry Log Pages: Not Supported 00:14:26.526 Persistent Event Log Pages: Not Supported 00:14:26.526 Supported Log Pages Log Page: May Support 00:14:26.526 Commands Supported & Effects Log Page: Not Supported 00:14:26.526 Feature Identifiers & Effects Log Page: May Support 00:14:26.526 NVMe-MI Commands & Effects Log Page: May Support 00:14:26.526 Data Area 4 for Telemetry Log: Not Supported 00:14:26.526 Error Log Page Entries Supported: 1 00:14:26.526 Keep Alive: Not Supported 00:14:26.526 00:14:26.526 NVM Command Set Attributes 00:14:26.526 ========================== 00:14:26.526 Submission Queue Entry Size 00:14:26.526 Max: 64 00:14:26.526 Min: 64 00:14:26.526 Completion Queue Entry Size 00:14:26.526 Max: 16 00:14:26.526 Min: 16 00:14:26.526 Number of Namespaces: 256 00:14:26.526 Compare Command: Supported 00:14:26.526 Write Uncorrectable Command: Not Supported 00:14:26.526 Dataset Management Command: Supported 00:14:26.526 Write Zeroes Command: Supported 00:14:26.526 Set Features Save Field: Supported 00:14:26.526 Reservations: Not Supported 00:14:26.526 Timestamp: Supported 00:14:26.526 Copy: Supported 00:14:26.526 Volatile Write Cache: Present 00:14:26.526 Atomic Write Unit (Normal): 1 00:14:26.526 Atomic Write Unit (PFail): 1 00:14:26.526 Atomic Compare & Write Unit: 1 00:14:26.526 Fused Compare & Write: Not Supported 00:14:26.526 Scatter-Gather List 00:14:26.526 SGL Command Set: Supported 00:14:26.526 SGL Keyed: Not Supported 00:14:26.526 SGL Bit Bucket Descriptor: Not Supported 00:14:26.526 SGL Metadata Pointer: Not Supported 00:14:26.526 Oversized SGL: Not Supported 00:14:26.526 SGL Metadata Address: Not Supported 00:14:26.526 SGL Offset: Not Supported 00:14:26.526 Transport SGL Data Block: Not Supported 00:14:26.526 Replay Protected Memory Block: Not Supported 00:14:26.526 00:14:26.526 Firmware Slot Information 00:14:26.526 ========================= 00:14:26.526 Active slot: 1 00:14:26.526 Slot 1 Firmware Revision: 1.0 00:14:26.526 00:14:26.526 00:14:26.526 Commands Supported and Effects 00:14:26.526 ============================== 00:14:26.526 Admin Commands 00:14:26.526 -------------- 00:14:26.526 Delete I/O Submission Queue (00h): Supported 00:14:26.526 Create I/O Submission Queue (01h): Supported 00:14:26.526 
Get Log Page (02h): Supported 00:14:26.526 Delete I/O Completion Queue (04h): Supported 00:14:26.526 Create I/O Completion Queue (05h): Supported 00:14:26.526 Identify (06h): Supported 00:14:26.526 Abort (08h): Supported 00:14:26.526 Set Features (09h): Supported 00:14:26.526 Get Features (0Ah): Supported 00:14:26.526 Asynchronous Event Request (0Ch): Supported 00:14:26.526 Namespace Attachment (15h): Supported NS-Inventory-Change 00:14:26.526 Directive Send (19h): Supported 00:14:26.526 Directive Receive (1Ah): Supported 00:14:26.526 Virtualization Management (1Ch): Supported 00:14:26.526 Doorbell Buffer Config (7Ch): Supported 00:14:26.526 Format NVM (80h): Supported LBA-Change 00:14:26.526 I/O Commands 00:14:26.526 ------------ 00:14:26.526 Flush (00h): Supported LBA-Change 00:14:26.526 Write (01h): Supported LBA-Change 00:14:26.526 Read (02h): Supported 00:14:26.526 Compare (05h): Supported 00:14:26.526 Write Zeroes (08h): Supported LBA-Change 00:14:26.526 Dataset Management (09h): Supported LBA-Change 00:14:26.526 Unknown (0Ch): Supported 00:14:26.526 Unknown (12h): Supported 00:14:26.526 Copy (19h): Supported LBA-Change 00:14:26.526 Unknown (1Dh): Supported LBA-Change 00:14:26.526 00:14:26.526 Error Log 00:14:26.526 ========= 00:14:26.526 00:14:26.526 Arbitration 00:14:26.526 =========== 00:14:26.526 Arbitration Burst: no limit 00:14:26.526 00:14:26.526 Power Management 00:14:26.526 ================ 00:14:26.526 Number of Power States: 1 00:14:26.526 Current Power State: Power State #0 00:14:26.526 Power State #0: 00:14:26.526 Max Power: 25.00 W 00:14:26.526 Non-Operational State: Operational 00:14:26.526 Entry Latency: 16 microseconds 00:14:26.526 Exit Latency: 4 microseconds 00:14:26.526 Relative Read Throughput: 0 00:14:26.526 Relative Read Latency: 0 00:14:26.526 Relative Write Throughput: 0 00:14:26.526 Relative Write Latency: 0 00:14:26.785 Idle Power: Not Reported 00:14:26.785 Active Power: Not Reported 00:14:26.785 Non-Operational Permissive Mode: Not Supported 00:14:26.785 00:14:26.785 Health Information 00:14:26.785 ================== 00:14:26.785 Critical Warnings: 00:14:26.785 Available Spare Space: OK 00:14:26.785 Temperature: OK 00:14:26.785 Device Reliability: OK 00:14:26.785 Read Only: No 00:14:26.785 Volatile Memory Backup: OK 00:14:26.785 Current Temperature: 323 Kelvin (50 Celsius) 00:14:26.785 Temperature Threshold: 343 Kelvin (70 Celsius) 00:14:26.785 Available Spare: 0% 00:14:26.785 Available Spare Threshold: 0% 00:14:26.785 Life Percentage Used: 0% 00:14:26.785 Data Units Read: 646 00:14:26.785 Data Units Written: 574 00:14:26.785 Host Read Commands: 28464 00:14:26.785 Host Write Commands: 28250 00:14:26.785 Controller Busy Time: 0 minutes 00:14:26.785 Power Cycles: 0 00:14:26.785 Power On Hours: 0 hours 00:14:26.785 Unsafe Shutdowns: 0 00:14:26.785 Unrecoverable Media Errors: 0 00:14:26.785 Lifetime Error Log Entries: 0 00:14:26.785 Warning Temperature Time: 0 minutes 00:14:26.785 Critical Temperature Time: 0 minutes 00:14:26.785 00:14:26.785 Number of Queues 00:14:26.785 ================ 00:14:26.785 Number of I/O Submission Queues: 64 00:14:26.785 Number of I/O Completion Queues: 64 00:14:26.785 00:14:26.785 ZNS Specific Controller Data 00:14:26.785 ============================ 00:14:26.785 Zone Append Size Limit: 0 00:14:26.785 00:14:26.785 00:14:26.785 Active Namespaces 00:14:26.785 ================= 00:14:26.785 Namespace ID:1 00:14:26.785 Error Recovery Timeout: Unlimited 00:14:26.785 Command Set Identifier: NVM (00h) 00:14:26.785 Deallocate: Supported 
00:14:26.785 Deallocated/Unwritten Error: Supported 00:14:26.785 Deallocated Read Value: All 0x00 00:14:26.785 Deallocate in Write Zeroes: Not Supported 00:14:26.786 Deallocated Guard Field: 0xFFFF 00:14:26.786 Flush: Supported 00:14:26.786 Reservation: Not Supported 00:14:26.786 Metadata Transferred as: Separate Metadata Buffer 00:14:26.786 Namespace Sharing Capabilities: Private 00:14:26.786 Size (in LBAs): 1548666 (5GiB) 00:14:26.786 Capacity (in LBAs): 1548666 (5GiB) 00:14:26.786 Utilization (in LBAs): 1548666 (5GiB) 00:14:26.786 Thin Provisioning: Not Supported 00:14:26.786 Per-NS Atomic Units: No 00:14:26.786 Maximum Single Source Range Length: 128 00:14:26.786 Maximum Copy Length: 128 00:14:26.786 Maximum Source Range Count: 128 00:14:26.786 NGUID/EUI64 Never Reused: No 00:14:26.786 Namespace Write Protected: No 00:14:26.786 Number of LBA Formats: 8 00:14:26.786 Current LBA Format: LBA Format #07 00:14:26.786 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:26.786 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:26.786 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:26.786 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:26.786 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:26.786 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:26.786 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:26.786 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:26.786 00:14:26.786 NVM Specific Namespace Data 00:14:26.786 =========================== 00:14:26.786 Logical Block Storage Tag Mask: 0 00:14:26.786 Protection Information Capabilities: 00:14:26.786 16b Guard Protection Information Storage Tag Support: No 00:14:26.786 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:26.786 Storage Tag Check Read Support: No 00:14:26.786 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.786 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.786 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.786 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.786 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.786 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.786 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.786 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:26.786 10:05:57 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:14:26.786 10:05:57 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:14:27.045 ===================================================== 00:14:27.045 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:27.045 ===================================================== 00:14:27.045 Controller Capabilities/Features 00:14:27.045 ================================ 00:14:27.045 Vendor ID: 1b36 00:14:27.045 Subsystem Vendor ID: 1af4 00:14:27.045 Serial Number: 12341 00:14:27.045 Model Number: QEMU NVMe Ctrl 00:14:27.045 Firmware Version: 8.0.0 00:14:27.045 Recommended Arb Burst: 6 00:14:27.045 IEEE OUI Identifier: 00 54 52 00:14:27.045 Multi-path I/O 00:14:27.045 May have multiple subsystem ports: No 00:14:27.045 May have multiple 
controllers: No 00:14:27.045 Associated with SR-IOV VF: No 00:14:27.045 Max Data Transfer Size: 524288 00:14:27.045 Max Number of Namespaces: 256 00:14:27.045 Max Number of I/O Queues: 64 00:14:27.045 NVMe Specification Version (VS): 1.4 00:14:27.045 NVMe Specification Version (Identify): 1.4 00:14:27.045 Maximum Queue Entries: 2048 00:14:27.045 Contiguous Queues Required: Yes 00:14:27.045 Arbitration Mechanisms Supported 00:14:27.045 Weighted Round Robin: Not Supported 00:14:27.045 Vendor Specific: Not Supported 00:14:27.045 Reset Timeout: 7500 ms 00:14:27.045 Doorbell Stride: 4 bytes 00:14:27.045 NVM Subsystem Reset: Not Supported 00:14:27.045 Command Sets Supported 00:14:27.045 NVM Command Set: Supported 00:14:27.045 Boot Partition: Not Supported 00:14:27.045 Memory Page Size Minimum: 4096 bytes 00:14:27.045 Memory Page Size Maximum: 65536 bytes 00:14:27.045 Persistent Memory Region: Not Supported 00:14:27.045 Optional Asynchronous Events Supported 00:14:27.045 Namespace Attribute Notices: Supported 00:14:27.045 Firmware Activation Notices: Not Supported 00:14:27.045 ANA Change Notices: Not Supported 00:14:27.045 PLE Aggregate Log Change Notices: Not Supported 00:14:27.045 LBA Status Info Alert Notices: Not Supported 00:14:27.045 EGE Aggregate Log Change Notices: Not Supported 00:14:27.045 Normal NVM Subsystem Shutdown event: Not Supported 00:14:27.045 Zone Descriptor Change Notices: Not Supported 00:14:27.045 Discovery Log Change Notices: Not Supported 00:14:27.045 Controller Attributes 00:14:27.045 128-bit Host Identifier: Not Supported 00:14:27.045 Non-Operational Permissive Mode: Not Supported 00:14:27.045 NVM Sets: Not Supported 00:14:27.045 Read Recovery Levels: Not Supported 00:14:27.045 Endurance Groups: Not Supported 00:14:27.045 Predictable Latency Mode: Not Supported 00:14:27.045 Traffic Based Keep Alive: Not Supported 00:14:27.045 Namespace Granularity: Not Supported 00:14:27.045 SQ Associations: Not Supported 00:14:27.045 UUID List: Not Supported 00:14:27.045 Multi-Domain Subsystem: Not Supported 00:14:27.045 Fixed Capacity Management: Not Supported 00:14:27.045 Variable Capacity Management: Not Supported 00:14:27.045 Delete Endurance Group: Not Supported 00:14:27.045 Delete NVM Set: Not Supported 00:14:27.045 Extended LBA Formats Supported: Supported 00:14:27.045 Flexible Data Placement Supported: Not Supported 00:14:27.045 00:14:27.045 Controller Memory Buffer Support 00:14:27.045 ================================ 00:14:27.045 Supported: No 00:14:27.045 00:14:27.045 Persistent Memory Region Support 00:14:27.045 ================================ 00:14:27.045 Supported: No 00:14:27.045 00:14:27.045 Admin Command Set Attributes 00:14:27.045 ============================ 00:14:27.045 Security Send/Receive: Not Supported 00:14:27.045 Format NVM: Supported 00:14:27.045 Firmware Activate/Download: Not Supported 00:14:27.045 Namespace Management: Supported 00:14:27.045 Device Self-Test: Not Supported 00:14:27.045 Directives: Supported 00:14:27.045 NVMe-MI: Not Supported 00:14:27.045 Virtualization Management: Not Supported 00:14:27.045 Doorbell Buffer Config: Supported 00:14:27.045 Get LBA Status Capability: Not Supported 00:14:27.045 Command & Feature Lockdown Capability: Not Supported 00:14:27.045 Abort Command Limit: 4 00:14:27.045 Async Event Request Limit: 4 00:14:27.045 Number of Firmware Slots: N/A 00:14:27.045 Firmware Slot 1 Read-Only: N/A 00:14:27.045 Firmware Activation Without Reset: N/A 00:14:27.045 Multiple Update Detection Support: N/A 00:14:27.045 Firmware Update 
Granularity: No Information Provided 00:14:27.045 Per-Namespace SMART Log: Yes 00:14:27.045 Asymmetric Namespace Access Log Page: Not Supported 00:14:27.045 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:14:27.045 Command Effects Log Page: Supported 00:14:27.045 Get Log Page Extended Data: Supported 00:14:27.045 Telemetry Log Pages: Not Supported 00:14:27.045 Persistent Event Log Pages: Not Supported 00:14:27.045 Supported Log Pages Log Page: May Support 00:14:27.045 Commands Supported & Effects Log Page: Not Supported 00:14:27.045 Feature Identifiers & Effects Log Page: May Support 00:14:27.045 NVMe-MI Commands & Effects Log Page: May Support 00:14:27.045 Data Area 4 for Telemetry Log: Not Supported 00:14:27.045 Error Log Page Entries Supported: 1 00:14:27.045 Keep Alive: Not Supported 00:14:27.045 00:14:27.045 NVM Command Set Attributes 00:14:27.045 ========================== 00:14:27.045 Submission Queue Entry Size 00:14:27.045 Max: 64 00:14:27.045 Min: 64 00:14:27.045 Completion Queue Entry Size 00:14:27.045 Max: 16 00:14:27.045 Min: 16 00:14:27.045 Number of Namespaces: 256 00:14:27.045 Compare Command: Supported 00:14:27.045 Write Uncorrectable Command: Not Supported 00:14:27.045 Dataset Management Command: Supported 00:14:27.045 Write Zeroes Command: Supported 00:14:27.045 Set Features Save Field: Supported 00:14:27.045 Reservations: Not Supported 00:14:27.045 Timestamp: Supported 00:14:27.045 Copy: Supported 00:14:27.045 Volatile Write Cache: Present 00:14:27.045 Atomic Write Unit (Normal): 1 00:14:27.045 Atomic Write Unit (PFail): 1 00:14:27.045 Atomic Compare & Write Unit: 1 00:14:27.045 Fused Compare & Write: Not Supported 00:14:27.045 Scatter-Gather List 00:14:27.045 SGL Command Set: Supported 00:14:27.045 SGL Keyed: Not Supported 00:14:27.045 SGL Bit Bucket Descriptor: Not Supported 00:14:27.045 SGL Metadata Pointer: Not Supported 00:14:27.045 Oversized SGL: Not Supported 00:14:27.045 SGL Metadata Address: Not Supported 00:14:27.045 SGL Offset: Not Supported 00:14:27.045 Transport SGL Data Block: Not Supported 00:14:27.045 Replay Protected Memory Block: Not Supported 00:14:27.045 00:14:27.045 Firmware Slot Information 00:14:27.045 ========================= 00:14:27.045 Active slot: 1 00:14:27.045 Slot 1 Firmware Revision: 1.0 00:14:27.045 00:14:27.045 00:14:27.045 Commands Supported and Effects 00:14:27.045 ============================== 00:14:27.045 Admin Commands 00:14:27.045 -------------- 00:14:27.045 Delete I/O Submission Queue (00h): Supported 00:14:27.045 Create I/O Submission Queue (01h): Supported 00:14:27.046 Get Log Page (02h): Supported 00:14:27.046 Delete I/O Completion Queue (04h): Supported 00:14:27.046 Create I/O Completion Queue (05h): Supported 00:14:27.046 Identify (06h): Supported 00:14:27.046 Abort (08h): Supported 00:14:27.046 Set Features (09h): Supported 00:14:27.046 Get Features (0Ah): Supported 00:14:27.046 Asynchronous Event Request (0Ch): Supported 00:14:27.046 Namespace Attachment (15h): Supported NS-Inventory-Change 00:14:27.046 Directive Send (19h): Supported 00:14:27.046 Directive Receive (1Ah): Supported 00:14:27.046 Virtualization Management (1Ch): Supported 00:14:27.046 Doorbell Buffer Config (7Ch): Supported 00:14:27.046 Format NVM (80h): Supported LBA-Change 00:14:27.046 I/O Commands 00:14:27.046 ------------ 00:14:27.046 Flush (00h): Supported LBA-Change 00:14:27.046 Write (01h): Supported LBA-Change 00:14:27.046 Read (02h): Supported 00:14:27.046 Compare (05h): Supported 00:14:27.046 Write Zeroes (08h): Supported LBA-Change 00:14:27.046 
Dataset Management (09h): Supported LBA-Change 00:14:27.046 Unknown (0Ch): Supported 00:14:27.046 Unknown (12h): Supported 00:14:27.046 Copy (19h): Supported LBA-Change 00:14:27.046 Unknown (1Dh): Supported LBA-Change 00:14:27.046 00:14:27.046 Error Log 00:14:27.046 ========= 00:14:27.046 00:14:27.046 Arbitration 00:14:27.046 =========== 00:14:27.046 Arbitration Burst: no limit 00:14:27.046 00:14:27.046 Power Management 00:14:27.046 ================ 00:14:27.046 Number of Power States: 1 00:14:27.046 Current Power State: Power State #0 00:14:27.046 Power State #0: 00:14:27.046 Max Power: 25.00 W 00:14:27.046 Non-Operational State: Operational 00:14:27.046 Entry Latency: 16 microseconds 00:14:27.046 Exit Latency: 4 microseconds 00:14:27.046 Relative Read Throughput: 0 00:14:27.046 Relative Read Latency: 0 00:14:27.046 Relative Write Throughput: 0 00:14:27.046 Relative Write Latency: 0 00:14:27.046 Idle Power: Not Reported 00:14:27.046 Active Power: Not Reported 00:14:27.046 Non-Operational Permissive Mode: Not Supported 00:14:27.046 00:14:27.046 Health Information 00:14:27.046 ================== 00:14:27.046 Critical Warnings: 00:14:27.046 Available Spare Space: OK 00:14:27.046 Temperature: OK 00:14:27.046 Device Reliability: OK 00:14:27.046 Read Only: No 00:14:27.046 Volatile Memory Backup: OK 00:14:27.046 Current Temperature: 323 Kelvin (50 Celsius) 00:14:27.046 Temperature Threshold: 343 Kelvin (70 Celsius) 00:14:27.046 Available Spare: 0% 00:14:27.046 Available Spare Threshold: 0% 00:14:27.046 Life Percentage Used: 0% 00:14:27.046 Data Units Read: 990 00:14:27.046 Data Units Written: 850 00:14:27.046 Host Read Commands: 41918 00:14:27.046 Host Write Commands: 40611 00:14:27.046 Controller Busy Time: 0 minutes 00:14:27.046 Power Cycles: 0 00:14:27.046 Power On Hours: 0 hours 00:14:27.046 Unsafe Shutdowns: 0 00:14:27.046 Unrecoverable Media Errors: 0 00:14:27.046 Lifetime Error Log Entries: 0 00:14:27.046 Warning Temperature Time: 0 minutes 00:14:27.046 Critical Temperature Time: 0 minutes 00:14:27.046 00:14:27.046 Number of Queues 00:14:27.046 ================ 00:14:27.046 Number of I/O Submission Queues: 64 00:14:27.046 Number of I/O Completion Queues: 64 00:14:27.046 00:14:27.046 ZNS Specific Controller Data 00:14:27.046 ============================ 00:14:27.046 Zone Append Size Limit: 0 00:14:27.046 00:14:27.046 00:14:27.046 Active Namespaces 00:14:27.046 ================= 00:14:27.046 Namespace ID:1 00:14:27.046 Error Recovery Timeout: Unlimited 00:14:27.046 Command Set Identifier: NVM (00h) 00:14:27.046 Deallocate: Supported 00:14:27.046 Deallocated/Unwritten Error: Supported 00:14:27.046 Deallocated Read Value: All 0x00 00:14:27.046 Deallocate in Write Zeroes: Not Supported 00:14:27.046 Deallocated Guard Field: 0xFFFF 00:14:27.046 Flush: Supported 00:14:27.046 Reservation: Not Supported 00:14:27.046 Namespace Sharing Capabilities: Private 00:14:27.046 Size (in LBAs): 1310720 (5GiB) 00:14:27.046 Capacity (in LBAs): 1310720 (5GiB) 00:14:27.046 Utilization (in LBAs): 1310720 (5GiB) 00:14:27.046 Thin Provisioning: Not Supported 00:14:27.046 Per-NS Atomic Units: No 00:14:27.046 Maximum Single Source Range Length: 128 00:14:27.046 Maximum Copy Length: 128 00:14:27.046 Maximum Source Range Count: 128 00:14:27.046 NGUID/EUI64 Never Reused: No 00:14:27.046 Namespace Write Protected: No 00:14:27.046 Number of LBA Formats: 8 00:14:27.046 Current LBA Format: LBA Format #04 00:14:27.046 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:27.046 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:14:27.046 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:27.046 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:27.046 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:27.046 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:27.046 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:27.046 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:27.046 00:14:27.046 NVM Specific Namespace Data 00:14:27.046 =========================== 00:14:27.046 Logical Block Storage Tag Mask: 0 00:14:27.046 Protection Information Capabilities: 00:14:27.046 16b Guard Protection Information Storage Tag Support: No 00:14:27.046 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:27.046 Storage Tag Check Read Support: No 00:14:27.046 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:27.046 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:27.046 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:27.046 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:27.046 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:27.046 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:27.046 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:27.046 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:27.046 10:05:57 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:14:27.046 10:05:57 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:14:27.305 ===================================================== 00:14:27.305 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:27.305 ===================================================== 00:14:27.305 Controller Capabilities/Features 00:14:27.305 ================================ 00:14:27.305 Vendor ID: 1b36 00:14:27.305 Subsystem Vendor ID: 1af4 00:14:27.305 Serial Number: 12342 00:14:27.305 Model Number: QEMU NVMe Ctrl 00:14:27.305 Firmware Version: 8.0.0 00:14:27.305 Recommended Arb Burst: 6 00:14:27.305 IEEE OUI Identifier: 00 54 52 00:14:27.305 Multi-path I/O 00:14:27.305 May have multiple subsystem ports: No 00:14:27.305 May have multiple controllers: No 00:14:27.305 Associated with SR-IOV VF: No 00:14:27.305 Max Data Transfer Size: 524288 00:14:27.305 Max Number of Namespaces: 256 00:14:27.305 Max Number of I/O Queues: 64 00:14:27.305 NVMe Specification Version (VS): 1.4 00:14:27.305 NVMe Specification Version (Identify): 1.4 00:14:27.305 Maximum Queue Entries: 2048 00:14:27.305 Contiguous Queues Required: Yes 00:14:27.305 Arbitration Mechanisms Supported 00:14:27.305 Weighted Round Robin: Not Supported 00:14:27.305 Vendor Specific: Not Supported 00:14:27.305 Reset Timeout: 7500 ms 00:14:27.305 Doorbell Stride: 4 bytes 00:14:27.306 NVM Subsystem Reset: Not Supported 00:14:27.306 Command Sets Supported 00:14:27.306 NVM Command Set: Supported 00:14:27.306 Boot Partition: Not Supported 00:14:27.306 Memory Page Size Minimum: 4096 bytes 00:14:27.306 Memory Page Size Maximum: 65536 bytes 00:14:27.306 Persistent Memory Region: Not Supported 00:14:27.306 Optional Asynchronous Events Supported 00:14:27.306 Namespace Attribute Notices: Supported 00:14:27.306 Firmware 
Activation Notices: Not Supported 00:14:27.306 ANA Change Notices: Not Supported 00:14:27.306 PLE Aggregate Log Change Notices: Not Supported 00:14:27.306 LBA Status Info Alert Notices: Not Supported 00:14:27.306 EGE Aggregate Log Change Notices: Not Supported 00:14:27.306 Normal NVM Subsystem Shutdown event: Not Supported 00:14:27.306 Zone Descriptor Change Notices: Not Supported 00:14:27.306 Discovery Log Change Notices: Not Supported 00:14:27.306 Controller Attributes 00:14:27.306 128-bit Host Identifier: Not Supported 00:14:27.306 Non-Operational Permissive Mode: Not Supported 00:14:27.306 NVM Sets: Not Supported 00:14:27.306 Read Recovery Levels: Not Supported 00:14:27.306 Endurance Groups: Not Supported 00:14:27.306 Predictable Latency Mode: Not Supported 00:14:27.306 Traffic Based Keep Alive: Not Supported 00:14:27.306 Namespace Granularity: Not Supported 00:14:27.306 SQ Associations: Not Supported 00:14:27.306 UUID List: Not Supported 00:14:27.306 Multi-Domain Subsystem: Not Supported 00:14:27.306 Fixed Capacity Management: Not Supported 00:14:27.306 Variable Capacity Management: Not Supported 00:14:27.306 Delete Endurance Group: Not Supported 00:14:27.306 Delete NVM Set: Not Supported 00:14:27.306 Extended LBA Formats Supported: Supported 00:14:27.306 Flexible Data Placement Supported: Not Supported 00:14:27.306 00:14:27.306 Controller Memory Buffer Support 00:14:27.306 ================================ 00:14:27.306 Supported: No 00:14:27.306 00:14:27.306 Persistent Memory Region Support 00:14:27.306 ================================ 00:14:27.306 Supported: No 00:14:27.306 00:14:27.306 Admin Command Set Attributes 00:14:27.306 ============================ 00:14:27.306 Security Send/Receive: Not Supported 00:14:27.306 Format NVM: Supported 00:14:27.306 Firmware Activate/Download: Not Supported 00:14:27.306 Namespace Management: Supported 00:14:27.306 Device Self-Test: Not Supported 00:14:27.306 Directives: Supported 00:14:27.306 NVMe-MI: Not Supported 00:14:27.306 Virtualization Management: Not Supported 00:14:27.306 Doorbell Buffer Config: Supported 00:14:27.306 Get LBA Status Capability: Not Supported 00:14:27.306 Command & Feature Lockdown Capability: Not Supported 00:14:27.306 Abort Command Limit: 4 00:14:27.306 Async Event Request Limit: 4 00:14:27.306 Number of Firmware Slots: N/A 00:14:27.306 Firmware Slot 1 Read-Only: N/A 00:14:27.306 Firmware Activation Without Reset: N/A 00:14:27.306 Multiple Update Detection Support: N/A 00:14:27.306 Firmware Update Granularity: No Information Provided 00:14:27.306 Per-Namespace SMART Log: Yes 00:14:27.306 Asymmetric Namespace Access Log Page: Not Supported 00:14:27.306 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:14:27.306 Command Effects Log Page: Supported 00:14:27.306 Get Log Page Extended Data: Supported 00:14:27.306 Telemetry Log Pages: Not Supported 00:14:27.306 Persistent Event Log Pages: Not Supported 00:14:27.306 Supported Log Pages Log Page: May Support 00:14:27.306 Commands Supported & Effects Log Page: Not Supported 00:14:27.306 Feature Identifiers & Effects Log Page: May Support 00:14:27.306 NVMe-MI Commands & Effects Log Page: May Support 00:14:27.306 Data Area 4 for Telemetry Log: Not Supported 00:14:27.306 Error Log Page Entries Supported: 1 00:14:27.306 Keep Alive: Not Supported 00:14:27.306 00:14:27.306 NVM Command Set Attributes 00:14:27.306 ========================== 00:14:27.306 Submission Queue Entry Size 00:14:27.306 Max: 64 00:14:27.306 Min: 64 00:14:27.306 Completion Queue Entry Size 00:14:27.306 Max: 16 
00:14:27.306 Min: 16 00:14:27.306 Number of Namespaces: 256 00:14:27.306 Compare Command: Supported 00:14:27.306 Write Uncorrectable Command: Not Supported 00:14:27.306 Dataset Management Command: Supported 00:14:27.306 Write Zeroes Command: Supported 00:14:27.306 Set Features Save Field: Supported 00:14:27.306 Reservations: Not Supported 00:14:27.306 Timestamp: Supported 00:14:27.306 Copy: Supported 00:14:27.306 Volatile Write Cache: Present 00:14:27.306 Atomic Write Unit (Normal): 1 00:14:27.306 Atomic Write Unit (PFail): 1 00:14:27.306 Atomic Compare & Write Unit: 1 00:14:27.306 Fused Compare & Write: Not Supported 00:14:27.306 Scatter-Gather List 00:14:27.306 SGL Command Set: Supported 00:14:27.306 SGL Keyed: Not Supported 00:14:27.306 SGL Bit Bucket Descriptor: Not Supported 00:14:27.306 SGL Metadata Pointer: Not Supported 00:14:27.306 Oversized SGL: Not Supported 00:14:27.306 SGL Metadata Address: Not Supported 00:14:27.306 SGL Offset: Not Supported 00:14:27.306 Transport SGL Data Block: Not Supported 00:14:27.306 Replay Protected Memory Block: Not Supported 00:14:27.306 00:14:27.306 Firmware Slot Information 00:14:27.306 ========================= 00:14:27.306 Active slot: 1 00:14:27.306 Slot 1 Firmware Revision: 1.0 00:14:27.306 00:14:27.306 00:14:27.306 Commands Supported and Effects 00:14:27.306 ============================== 00:14:27.306 Admin Commands 00:14:27.306 -------------- 00:14:27.306 Delete I/O Submission Queue (00h): Supported 00:14:27.306 Create I/O Submission Queue (01h): Supported 00:14:27.306 Get Log Page (02h): Supported 00:14:27.306 Delete I/O Completion Queue (04h): Supported 00:14:27.306 Create I/O Completion Queue (05h): Supported 00:14:27.306 Identify (06h): Supported 00:14:27.306 Abort (08h): Supported 00:14:27.306 Set Features (09h): Supported 00:14:27.306 Get Features (0Ah): Supported 00:14:27.306 Asynchronous Event Request (0Ch): Supported 00:14:27.306 Namespace Attachment (15h): Supported NS-Inventory-Change 00:14:27.306 Directive Send (19h): Supported 00:14:27.306 Directive Receive (1Ah): Supported 00:14:27.306 Virtualization Management (1Ch): Supported 00:14:27.306 Doorbell Buffer Config (7Ch): Supported 00:14:27.306 Format NVM (80h): Supported LBA-Change 00:14:27.306 I/O Commands 00:14:27.306 ------------ 00:14:27.306 Flush (00h): Supported LBA-Change 00:14:27.306 Write (01h): Supported LBA-Change 00:14:27.306 Read (02h): Supported 00:14:27.306 Compare (05h): Supported 00:14:27.306 Write Zeroes (08h): Supported LBA-Change 00:14:27.306 Dataset Management (09h): Supported LBA-Change 00:14:27.306 Unknown (0Ch): Supported 00:14:27.306 Unknown (12h): Supported 00:14:27.306 Copy (19h): Supported LBA-Change 00:14:27.306 Unknown (1Dh): Supported LBA-Change 00:14:27.306 00:14:27.306 Error Log 00:14:27.306 ========= 00:14:27.306 00:14:27.306 Arbitration 00:14:27.306 =========== 00:14:27.306 Arbitration Burst: no limit 00:14:27.306 00:14:27.306 Power Management 00:14:27.306 ================ 00:14:27.306 Number of Power States: 1 00:14:27.306 Current Power State: Power State #0 00:14:27.306 Power State #0: 00:14:27.306 Max Power: 25.00 W 00:14:27.306 Non-Operational State: Operational 00:14:27.306 Entry Latency: 16 microseconds 00:14:27.306 Exit Latency: 4 microseconds 00:14:27.306 Relative Read Throughput: 0 00:14:27.306 Relative Read Latency: 0 00:14:27.306 Relative Write Throughput: 0 00:14:27.306 Relative Write Latency: 0 00:14:27.306 Idle Power: Not Reported 00:14:27.306 Active Power: Not Reported 00:14:27.306 Non-Operational Permissive Mode: Not Supported 
00:14:27.306 00:14:27.306 Health Information 00:14:27.306 ================== 00:14:27.306 Critical Warnings: 00:14:27.306 Available Spare Space: OK 00:14:27.306 Temperature: OK 00:14:27.306 Device Reliability: OK 00:14:27.306 Read Only: No 00:14:27.306 Volatile Memory Backup: OK 00:14:27.306 Current Temperature: 323 Kelvin (50 Celsius) 00:14:27.306 Temperature Threshold: 343 Kelvin (70 Celsius) 00:14:27.306 Available Spare: 0% 00:14:27.306 Available Spare Threshold: 0% 00:14:27.306 Life Percentage Used: 0% 00:14:27.306 Data Units Read: 2023 00:14:27.306 Data Units Written: 1810 00:14:27.306 Host Read Commands: 86843 00:14:27.306 Host Write Commands: 85112 00:14:27.306 Controller Busy Time: 0 minutes 00:14:27.306 Power Cycles: 0 00:14:27.306 Power On Hours: 0 hours 00:14:27.306 Unsafe Shutdowns: 0 00:14:27.306 Unrecoverable Media Errors: 0 00:14:27.306 Lifetime Error Log Entries: 0 00:14:27.306 Warning Temperature Time: 0 minutes 00:14:27.306 Critical Temperature Time: 0 minutes 00:14:27.306 00:14:27.306 Number of Queues 00:14:27.306 ================ 00:14:27.306 Number of I/O Submission Queues: 64 00:14:27.306 Number of I/O Completion Queues: 64 00:14:27.306 00:14:27.306 ZNS Specific Controller Data 00:14:27.306 ============================ 00:14:27.306 Zone Append Size Limit: 0 00:14:27.306 00:14:27.306 00:14:27.306 Active Namespaces 00:14:27.306 ================= 00:14:27.307 Namespace ID:1 00:14:27.307 Error Recovery Timeout: Unlimited 00:14:27.307 Command Set Identifier: NVM (00h) 00:14:27.307 Deallocate: Supported 00:14:27.307 Deallocated/Unwritten Error: Supported 00:14:27.307 Deallocated Read Value: All 0x00 00:14:27.307 Deallocate in Write Zeroes: Not Supported 00:14:27.307 Deallocated Guard Field: 0xFFFF 00:14:27.307 Flush: Supported 00:14:27.307 Reservation: Not Supported 00:14:27.307 Namespace Sharing Capabilities: Private 00:14:27.307 Size (in LBAs): 1048576 (4GiB) 00:14:27.307 Capacity (in LBAs): 1048576 (4GiB) 00:14:27.307 Utilization (in LBAs): 1048576 (4GiB) 00:14:27.307 Thin Provisioning: Not Supported 00:14:27.307 Per-NS Atomic Units: No 00:14:27.307 Maximum Single Source Range Length: 128 00:14:27.307 Maximum Copy Length: 128 00:14:27.307 Maximum Source Range Count: 128 00:14:27.307 NGUID/EUI64 Never Reused: No 00:14:27.307 Namespace Write Protected: No 00:14:27.307 Number of LBA Formats: 8 00:14:27.307 Current LBA Format: LBA Format #04 00:14:27.307 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:27.307 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:27.307 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:27.307 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:27.307 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:27.307 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:27.307 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:27.307 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:27.307 00:14:27.307 NVM Specific Namespace Data 00:14:27.307 =========================== 00:14:27.307 Logical Block Storage Tag Mask: 0 00:14:27.307 Protection Information Capabilities: 00:14:27.307 16b Guard Protection Information Storage Tag Support: No 00:14:27.307 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:27.307 Storage Tag Check Read Support: No 00:14:27.307 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:27.307 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:27.307 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:14:27.307 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:27.307 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:27.307 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:27.307 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:27.307 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:27.307 Namespace ID:2 00:14:27.307 Error Recovery Timeout: Unlimited 00:14:27.307 Command Set Identifier: NVM (00h) 00:14:27.307 Deallocate: Supported 00:14:27.307 Deallocated/Unwritten Error: Supported 00:14:27.307 Deallocated Read Value: All 0x00 00:14:27.307 Deallocate in Write Zeroes: Not Supported 00:14:27.307 Deallocated Guard Field: 0xFFFF 00:14:27.307 Flush: Supported 00:14:27.307 Reservation: Not Supported 00:14:27.307 Namespace Sharing Capabilities: Private 00:14:27.307 Size (in LBAs): 1048576 (4GiB) 00:14:27.307 Capacity (in LBAs): 1048576 (4GiB) 00:14:27.307 Utilization (in LBAs): 1048576 (4GiB) 00:14:27.307 Thin Provisioning: Not Supported 00:14:27.307 Per-NS Atomic Units: No 00:14:27.307 Maximum Single Source Range Length: 128 00:14:27.307 Maximum Copy Length: 128 00:14:27.307 Maximum Source Range Count: 128 00:14:27.307 NGUID/EUI64 Never Reused: No 00:14:27.307 Namespace Write Protected: No 00:14:27.307 Number of LBA Formats: 8 00:14:27.307 Current LBA Format: LBA Format #04 00:14:27.307 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:27.307 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:27.307 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:27.307 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:27.307 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:27.307 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:27.307 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:27.307 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:27.307 00:14:27.307 NVM Specific Namespace Data 00:14:27.307 =========================== 00:14:27.307 Logical Block Storage Tag Mask: 0 00:14:27.307 Protection Information Capabilities: 00:14:27.307 16b Guard Protection Information Storage Tag Support: No 00:14:27.307 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:27.307 Storage Tag Check Read Support: No 00:14:27.307 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:27.307 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:27.307 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:27.307 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:27.307 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:27.307 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:27.307 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:27.307 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:27.307 Namespace ID:3 00:14:27.307 Error Recovery Timeout: Unlimited 00:14:27.307 Command Set Identifier: NVM (00h) 00:14:27.307 Deallocate: Supported 00:14:27.307 Deallocated/Unwritten Error: Supported 00:14:27.307 Deallocated Read 
Value: All 0x00 00:14:27.307 Deallocate in Write Zeroes: Not Supported 00:14:27.307 Deallocated Guard Field: 0xFFFF 00:14:27.307 Flush: Supported 00:14:27.307 Reservation: Not Supported 00:14:27.307 Namespace Sharing Capabilities: Private 00:14:27.307 Size (in LBAs): 1048576 (4GiB) 00:14:27.307 Capacity (in LBAs): 1048576 (4GiB) 00:14:27.307 Utilization (in LBAs): 1048576 (4GiB) 00:14:27.307 Thin Provisioning: Not Supported 00:14:27.307 Per-NS Atomic Units: No 00:14:27.307 Maximum Single Source Range Length: 128 00:14:27.307 Maximum Copy Length: 128 00:14:27.307 Maximum Source Range Count: 128 00:14:27.307 NGUID/EUI64 Never Reused: No 00:14:27.307 Namespace Write Protected: No 00:14:27.307 Number of LBA Formats: 8 00:14:27.307 Current LBA Format: LBA Format #04 00:14:27.307 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:27.307 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:27.307 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:27.307 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:27.307 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:27.307 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:27.307 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:27.307 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:27.307 00:14:27.307 NVM Specific Namespace Data 00:14:27.307 =========================== 00:14:27.307 Logical Block Storage Tag Mask: 0 00:14:27.307 Protection Information Capabilities: 00:14:27.307 16b Guard Protection Information Storage Tag Support: No 00:14:27.307 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:27.567 Storage Tag Check Read Support: No 00:14:27.567 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:27.567 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:27.567 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:27.567 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:27.567 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:27.567 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:27.567 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:27.567 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:27.567 10:05:58 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:14:27.567 10:05:58 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:14:27.826 ===================================================== 00:14:27.826 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:27.826 ===================================================== 00:14:27.826 Controller Capabilities/Features 00:14:27.826 ================================ 00:14:27.826 Vendor ID: 1b36 00:14:27.826 Subsystem Vendor ID: 1af4 00:14:27.826 Serial Number: 12343 00:14:27.826 Model Number: QEMU NVMe Ctrl 00:14:27.826 Firmware Version: 8.0.0 00:14:27.826 Recommended Arb Burst: 6 00:14:27.826 IEEE OUI Identifier: 00 54 52 00:14:27.826 Multi-path I/O 00:14:27.826 May have multiple subsystem ports: No 00:14:27.826 May have multiple controllers: Yes 00:14:27.826 Associated with SR-IOV VF: No 00:14:27.826 Max Data Transfer Size: 524288 00:14:27.826 Max Number of Namespaces: 
256 00:14:27.826 Max Number of I/O Queues: 64 00:14:27.826 NVMe Specification Version (VS): 1.4 00:14:27.826 NVMe Specification Version (Identify): 1.4 00:14:27.826 Maximum Queue Entries: 2048 00:14:27.826 Contiguous Queues Required: Yes 00:14:27.826 Arbitration Mechanisms Supported 00:14:27.826 Weighted Round Robin: Not Supported 00:14:27.826 Vendor Specific: Not Supported 00:14:27.826 Reset Timeout: 7500 ms 00:14:27.826 Doorbell Stride: 4 bytes 00:14:27.826 NVM Subsystem Reset: Not Supported 00:14:27.826 Command Sets Supported 00:14:27.826 NVM Command Set: Supported 00:14:27.826 Boot Partition: Not Supported 00:14:27.826 Memory Page Size Minimum: 4096 bytes 00:14:27.826 Memory Page Size Maximum: 65536 bytes 00:14:27.827 Persistent Memory Region: Not Supported 00:14:27.827 Optional Asynchronous Events Supported 00:14:27.827 Namespace Attribute Notices: Supported 00:14:27.827 Firmware Activation Notices: Not Supported 00:14:27.827 ANA Change Notices: Not Supported 00:14:27.827 PLE Aggregate Log Change Notices: Not Supported 00:14:27.827 LBA Status Info Alert Notices: Not Supported 00:14:27.827 EGE Aggregate Log Change Notices: Not Supported 00:14:27.827 Normal NVM Subsystem Shutdown event: Not Supported 00:14:27.827 Zone Descriptor Change Notices: Not Supported 00:14:27.827 Discovery Log Change Notices: Not Supported 00:14:27.827 Controller Attributes 00:14:27.827 128-bit Host Identifier: Not Supported 00:14:27.827 Non-Operational Permissive Mode: Not Supported 00:14:27.827 NVM Sets: Not Supported 00:14:27.827 Read Recovery Levels: Not Supported 00:14:27.827 Endurance Groups: Supported 00:14:27.827 Predictable Latency Mode: Not Supported 00:14:27.827 Traffic Based Keep Alive: Not Supported 00:14:27.827 Namespace Granularity: Not Supported 00:14:27.827 SQ Associations: Not Supported 00:14:27.827 UUID List: Not Supported 00:14:27.827 Multi-Domain Subsystem: Not Supported 00:14:27.827 Fixed Capacity Management: Not Supported 00:14:27.827 Variable Capacity Management: Not Supported 00:14:27.827 Delete Endurance Group: Not Supported 00:14:27.827 Delete NVM Set: Not Supported 00:14:27.827 Extended LBA Formats Supported: Supported 00:14:27.827 Flexible Data Placement Supported: Supported 00:14:27.827 00:14:27.827 Controller Memory Buffer Support 00:14:27.827 ================================ 00:14:27.827 Supported: No 00:14:27.827 00:14:27.827 Persistent Memory Region Support 00:14:27.827 ================================ 00:14:27.827 Supported: No 00:14:27.827 00:14:27.827 Admin Command Set Attributes 00:14:27.827 ============================ 00:14:27.827 Security Send/Receive: Not Supported 00:14:27.827 Format NVM: Supported 00:14:27.827 Firmware Activate/Download: Not Supported 00:14:27.827 Namespace Management: Supported 00:14:27.827 Device Self-Test: Not Supported 00:14:27.827 Directives: Supported 00:14:27.827 NVMe-MI: Not Supported 00:14:27.827 Virtualization Management: Not Supported 00:14:27.827 Doorbell Buffer Config: Supported 00:14:27.827 Get LBA Status Capability: Not Supported 00:14:27.827 Command & Feature Lockdown Capability: Not Supported 00:14:27.827 Abort Command Limit: 4 00:14:27.827 Async Event Request Limit: 4 00:14:27.827 Number of Firmware Slots: N/A 00:14:27.827 Firmware Slot 1 Read-Only: N/A 00:14:27.827 Firmware Activation Without Reset: N/A 00:14:27.827 Multiple Update Detection Support: N/A 00:14:27.827 Firmware Update Granularity: No Information Provided 00:14:27.827 Per-Namespace SMART Log: Yes 00:14:27.827 Asymmetric Namespace Access Log Page: Not Supported
00:14:27.827 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:14:27.827 Command Effects Log Page: Supported 00:14:27.827 Get Log Page Extended Data: Supported 00:14:27.827 Telemetry Log Pages: Not Supported 00:14:27.827 Persistent Event Log Pages: Not Supported 00:14:27.827 Supported Log Pages Log Page: May Support 00:14:27.827 Commands Supported & Effects Log Page: Not Supported 00:14:27.827 Feature Identifiers & Effects Log Page: May Support 00:14:27.827 NVMe-MI Commands & Effects Log Page: May Support 00:14:27.827 Data Area 4 for Telemetry Log: Not Supported 00:14:27.827 Error Log Page Entries Supported: 1 00:14:27.827 Keep Alive: Not Supported 00:14:27.827 00:14:27.827 NVM Command Set Attributes 00:14:27.827 ========================== 00:14:27.827 Submission Queue Entry Size 00:14:27.827 Max: 64 00:14:27.827 Min: 64 00:14:27.827 Completion Queue Entry Size 00:14:27.827 Max: 16 00:14:27.827 Min: 16 00:14:27.827 Number of Namespaces: 256 00:14:27.827 Compare Command: Supported 00:14:27.827 Write Uncorrectable Command: Not Supported 00:14:27.827 Dataset Management Command: Supported 00:14:27.827 Write Zeroes Command: Supported 00:14:27.827 Set Features Save Field: Supported 00:14:27.827 Reservations: Not Supported 00:14:27.827 Timestamp: Supported 00:14:27.827 Copy: Supported 00:14:27.827 Volatile Write Cache: Present 00:14:27.827 Atomic Write Unit (Normal): 1 00:14:27.827 Atomic Write Unit (PFail): 1 00:14:27.827 Atomic Compare & Write Unit: 1 00:14:27.827 Fused Compare & Write: Not Supported 00:14:27.827 Scatter-Gather List 00:14:27.827 SGL Command Set: Supported 00:14:27.827 SGL Keyed: Not Supported 00:14:27.827 SGL Bit Bucket Descriptor: Not Supported 00:14:27.827 SGL Metadata Pointer: Not Supported 00:14:27.827 Oversized SGL: Not Supported 00:14:27.827 SGL Metadata Address: Not Supported 00:14:27.827 SGL Offset: Not Supported 00:14:27.827 Transport SGL Data Block: Not Supported 00:14:27.827 Replay Protected Memory Block: Not Supported 00:14:27.827 00:14:27.827 Firmware Slot Information 00:14:27.827 ========================= 00:14:27.827 Active slot: 1 00:14:27.827 Slot 1 Firmware Revision: 1.0 00:14:27.827 00:14:27.827 00:14:27.827 Commands Supported and Effects 00:14:27.827 ============================== 00:14:27.827 Admin Commands 00:14:27.827 -------------- 00:14:27.827 Delete I/O Submission Queue (00h): Supported 00:14:27.827 Create I/O Submission Queue (01h): Supported 00:14:27.827 Get Log Page (02h): Supported 00:14:27.827 Delete I/O Completion Queue (04h): Supported 00:14:27.827 Create I/O Completion Queue (05h): Supported 00:14:27.827 Identify (06h): Supported 00:14:27.827 Abort (08h): Supported 00:14:27.827 Set Features (09h): Supported 00:14:27.827 Get Features (0Ah): Supported 00:14:27.827 Asynchronous Event Request (0Ch): Supported 00:14:27.827 Namespace Attachment (15h): Supported NS-Inventory-Change 00:14:27.827 Directive Send (19h): Supported 00:14:27.827 Directive Receive (1Ah): Supported 00:14:27.827 Virtualization Management (1Ch): Supported 00:14:27.827 Doorbell Buffer Config (7Ch): Supported 00:14:27.827 Format NVM (80h): Supported LBA-Change 00:14:27.827 I/O Commands 00:14:27.827 ------------ 00:14:27.827 Flush (00h): Supported LBA-Change 00:14:27.827 Write (01h): Supported LBA-Change 00:14:27.827 Read (02h): Supported 00:14:27.827 Compare (05h): Supported 00:14:27.827 Write Zeroes (08h): Supported LBA-Change 00:14:27.827 Dataset Management (09h): Supported LBA-Change 00:14:27.827 Unknown (0Ch): Supported 00:14:27.827 Unknown (12h): Supported 00:14:27.827 Copy
(19h): Supported LBA-Change 00:14:27.827 Unknown (1Dh): Supported LBA-Change 00:14:27.827 00:14:27.827 Error Log 00:14:27.827 ========= 00:14:27.827 00:14:27.827 Arbitration 00:14:27.827 =========== 00:14:27.827 Arbitration Burst: no limit 00:14:27.827 00:14:27.827 Power Management 00:14:27.827 ================ 00:14:27.827 Number of Power States: 1 00:14:27.827 Current Power State: Power State #0 00:14:27.827 Power State #0: 00:14:27.827 Max Power: 25.00 W 00:14:27.827 Non-Operational State: Operational 00:14:27.827 Entry Latency: 16 microseconds 00:14:27.827 Exit Latency: 4 microseconds 00:14:27.827 Relative Read Throughput: 0 00:14:27.827 Relative Read Latency: 0 00:14:27.827 Relative Write Throughput: 0 00:14:27.827 Relative Write Latency: 0 00:14:27.827 Idle Power: Not Reported 00:14:27.827 Active Power: Not Reported 00:14:27.827 Non-Operational Permissive Mode: Not Supported 00:14:27.827 00:14:27.827 Health Information 00:14:27.827 ================== 00:14:27.827 Critical Warnings: 00:14:27.827 Available Spare Space: OK 00:14:27.827 Temperature: OK 00:14:27.827 Device Reliability: OK 00:14:27.827 Read Only: No 00:14:27.827 Volatile Memory Backup: OK 00:14:27.827 Current Temperature: 323 Kelvin (50 Celsius) 00:14:27.827 Temperature Threshold: 343 Kelvin (70 Celsius) 00:14:27.827 Available Spare: 0% 00:14:27.827 Available Spare Threshold: 0% 00:14:27.827 Life Percentage Used: 0% 00:14:27.827 Data Units Read: 744 00:14:27.827 Data Units Written: 673 00:14:27.827 Host Read Commands: 29711 00:14:27.827 Host Write Commands: 29134 00:14:27.827 Controller Busy Time: 0 minutes 00:14:27.827 Power Cycles: 0 00:14:27.827 Power On Hours: 0 hours 00:14:27.827 Unsafe Shutdowns: 0 00:14:27.827 Unrecoverable Media Errors: 0 00:14:27.827 Lifetime Error Log Entries: 0 00:14:27.827 Warning Temperature Time: 0 minutes 00:14:27.827 Critical Temperature Time: 0 minutes 00:14:27.827 00:14:27.827 Number of Queues 00:14:27.827 ================ 00:14:27.827 Number of I/O Submission Queues: 64 00:14:27.827 Number of I/O Completion Queues: 64 00:14:27.827 00:14:27.827 ZNS Specific Controller Data 00:14:27.827 ============================ 00:14:27.827 Zone Append Size Limit: 0 00:14:27.827 00:14:27.827 00:14:27.827 Active Namespaces 00:14:27.827 ================= 00:14:27.827 Namespace ID:1 00:14:27.827 Error Recovery Timeout: Unlimited 00:14:27.827 Command Set Identifier: NVM (00h) 00:14:27.827 Deallocate: Supported 00:14:27.827 Deallocated/Unwritten Error: Supported 00:14:27.828 Deallocated Read Value: All 0x00 00:14:27.828 Deallocate in Write Zeroes: Not Supported 00:14:27.828 Deallocated Guard Field: 0xFFFF 00:14:27.828 Flush: Supported 00:14:27.828 Reservation: Not Supported 00:14:27.828 Namespace Sharing Capabilities: Multiple Controllers 00:14:27.828 Size (in LBAs): 262144 (1GiB) 00:14:27.828 Capacity (in LBAs): 262144 (1GiB) 00:14:27.828 Utilization (in LBAs): 262144 (1GiB) 00:14:27.828 Thin Provisioning: Not Supported 00:14:27.828 Per-NS Atomic Units: No 00:14:27.828 Maximum Single Source Range Length: 128 00:14:27.828 Maximum Copy Length: 128 00:14:27.828 Maximum Source Range Count: 128 00:14:27.828 NGUID/EUI64 Never Reused: No 00:14:27.828 Namespace Write Protected: No 00:14:27.828 Endurance group ID: 1 00:14:27.828 Number of LBA Formats: 8 00:14:27.828 Current LBA Format: LBA Format #04 00:14:27.828 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:27.828 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:27.828 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:27.828 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:14:27.828 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:27.828 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:27.828 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:27.828 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:27.828 00:14:27.828 Get Feature FDP: 00:14:27.828 ================ 00:14:27.828 Enabled: Yes 00:14:27.828 FDP configuration index: 0 00:14:27.828 00:14:27.828 FDP configurations log page 00:14:27.828 =========================== 00:14:27.828 Number of FDP configurations: 1 00:14:27.828 Version: 0 00:14:27.828 Size: 112 00:14:27.828 FDP Configuration Descriptor: 0 00:14:27.828 Descriptor Size: 96 00:14:27.828 Reclaim Group Identifier format: 2 00:14:27.828 FDP Volatile Write Cache: Not Present 00:14:27.828 FDP Configuration: Valid 00:14:27.828 Vendor Specific Size: 0 00:14:27.828 Number of Reclaim Groups: 2 00:14:27.828 Number of Reclaim Unit Handles: 8 00:14:27.828 Max Placement Identifiers: 128 00:14:27.828 Number of Namespaces Supported: 256 00:14:27.828 Reclaim Unit Nominal Size: 6000000 bytes 00:14:27.828 Estimated Reclaim Unit Time Limit: Not Reported 00:14:27.828 RUH Desc #000: RUH Type: Initially Isolated 00:14:27.828 RUH Desc #001: RUH Type: Initially Isolated 00:14:27.828 RUH Desc #002: RUH Type: Initially Isolated 00:14:27.828 RUH Desc #003: RUH Type: Initially Isolated 00:14:27.828 RUH Desc #004: RUH Type: Initially Isolated 00:14:27.828 RUH Desc #005: RUH Type: Initially Isolated 00:14:27.828 RUH Desc #006: RUH Type: Initially Isolated 00:14:27.828 RUH Desc #007: RUH Type: Initially Isolated 00:14:27.828 00:14:27.828 FDP reclaim unit handle usage log page 00:14:28.087 ====================================== 00:14:28.087 Number of Reclaim Unit Handles: 8 00:14:28.087 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:14:28.087 RUH Usage Desc #001: RUH Attributes: Unused 00:14:28.087 RUH Usage Desc #002: RUH Attributes: Unused 00:14:28.087 RUH Usage Desc #003: RUH Attributes: Unused 00:14:28.087 RUH Usage Desc #004: RUH Attributes: Unused 00:14:28.087 RUH Usage Desc #005: RUH Attributes: Unused 00:14:28.087 RUH Usage Desc #006: RUH Attributes: Unused 00:14:28.087 RUH Usage Desc #007: RUH Attributes: Unused 00:14:28.087 00:14:28.087 FDP statistics log page 00:14:28.087 ======================= 00:14:28.087 Host bytes with metadata written: 425435136 00:14:28.087 Media bytes with metadata written: 425480192 00:14:28.087 Media bytes erased: 0 00:14:28.087 00:14:28.087 FDP events log page 00:14:28.087 =================== 00:14:28.087 Number of FDP events: 0 00:14:28.087 00:14:28.087 NVM Specific Namespace Data 00:14:28.087 =========================== 00:14:28.087 Logical Block Storage Tag Mask: 0 00:14:28.087 Protection Information Capabilities: 00:14:28.087 16b Guard Protection Information Storage Tag Support: No 00:14:28.087 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:28.087 Storage Tag Check Read Support: No 00:14:28.087 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:28.087 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:28.087 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:28.087 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:28.087 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:28.087 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:28.087 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:28.087 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:28.087 00:14:28.087 real 0m2.310s 00:14:28.087 user 0m1.178s 00:14:28.087 sys 0m0.899s 00:14:28.087 10:05:58 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:28.087 10:05:58 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:14:28.087 ************************************ 00:14:28.087 END TEST nvme_identify 00:14:28.087 ************************************ 00:14:28.087 10:05:58 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:14:28.087 10:05:58 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:28.087 10:05:58 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:28.087 10:05:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:28.087 ************************************ 00:14:28.087 START TEST nvme_perf 00:14:28.087 ************************************ 00:14:28.087 10:05:58 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:14:28.087 10:05:58 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:14:29.465 Initializing NVMe Controllers 00:14:29.465 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:29.465 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:29.465 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:29.465 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:29.465 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:14:29.465 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:14:29.465 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:14:29.465 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:14:29.465 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:14:29.465 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:14:29.465 Initialization complete. Launching workers. 
00:14:29.465 ======================================================== 00:14:29.465 Latency(us) 00:14:29.465 Device Information : IOPS MiB/s Average min max 00:14:29.465 PCIE (0000:00:10.0) NSID 1 from core 0: 12442.47 145.81 10310.22 7914.12 50790.69 00:14:29.465 PCIE (0000:00:11.0) NSID 1 from core 0: 12442.47 145.81 10284.76 8047.54 47598.08 00:14:29.465 PCIE (0000:00:13.0) NSID 1 from core 0: 12442.47 145.81 10250.76 8026.53 45202.36 00:14:29.465 PCIE (0000:00:12.0) NSID 1 from core 0: 12442.47 145.81 10215.24 7990.51 40848.89 00:14:29.465 PCIE (0000:00:12.0) NSID 2 from core 0: 12506.28 146.56 10135.82 7968.29 32158.58 00:14:29.465 PCIE (0000:00:12.0) NSID 3 from core 0: 12506.28 146.56 10107.88 7951.97 28981.34 00:14:29.465 ======================================================== 00:14:29.465 Total : 74782.46 876.36 10217.28 7914.12 50790.69 00:14:29.465 00:14:29.465 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:14:29.465 ================================================================================= 00:14:29.465 1.00000% : 8281.367us 00:14:29.465 10.00000% : 8817.571us 00:14:29.465 25.00000% : 9294.196us 00:14:29.465 50.00000% : 9889.978us 00:14:29.465 75.00000% : 10545.338us 00:14:29.465 90.00000% : 11081.542us 00:14:29.465 95.00000% : 11975.215us 00:14:29.465 98.00000% : 13285.935us 00:14:29.465 99.00000% : 40751.476us 00:14:29.465 99.50000% : 48139.171us 00:14:29.465 99.90000% : 50283.985us 00:14:29.465 99.99000% : 50760.611us 00:14:29.465 99.99900% : 50998.924us 00:14:29.465 99.99990% : 50998.924us 00:14:29.465 99.99999% : 50998.924us 00:14:29.465 00:14:29.465 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:14:29.465 ================================================================================= 00:14:29.465 1.00000% : 8340.945us 00:14:29.465 10.00000% : 8817.571us 00:14:29.465 25.00000% : 9294.196us 00:14:29.465 50.00000% : 9949.556us 00:14:29.465 75.00000% : 10485.760us 00:14:29.465 90.00000% : 11081.542us 00:14:29.465 95.00000% : 12094.371us 00:14:29.466 98.00000% : 13345.513us 00:14:29.466 99.00000% : 37891.724us 00:14:29.466 99.50000% : 45279.418us 00:14:29.466 99.90000% : 47185.920us 00:14:29.466 99.99000% : 47662.545us 00:14:29.466 99.99900% : 47662.545us 00:14:29.466 99.99990% : 47662.545us 00:14:29.466 99.99999% : 47662.545us 00:14:29.466 00:14:29.466 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:14:29.466 ================================================================================= 00:14:29.466 1.00000% : 8340.945us 00:14:29.466 10.00000% : 8817.571us 00:14:29.466 25.00000% : 9294.196us 00:14:29.466 50.00000% : 9949.556us 00:14:29.466 75.00000% : 10485.760us 00:14:29.466 90.00000% : 11081.542us 00:14:29.466 95.00000% : 12034.793us 00:14:29.466 98.00000% : 13285.935us 00:14:29.466 99.00000% : 34078.720us 00:14:29.466 99.50000% : 42657.978us 00:14:29.466 99.90000% : 44802.793us 00:14:29.466 99.99000% : 45279.418us 00:14:29.466 99.99900% : 45279.418us 00:14:29.466 99.99990% : 45279.418us 00:14:29.466 99.99999% : 45279.418us 00:14:29.466 00:14:29.466 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:14:29.466 ================================================================================= 00:14:29.466 1.00000% : 8340.945us 00:14:29.466 10.00000% : 8877.149us 00:14:29.466 25.00000% : 9294.196us 00:14:29.466 50.00000% : 9949.556us 00:14:29.466 75.00000% : 10485.760us 00:14:29.466 90.00000% : 11021.964us 00:14:29.466 95.00000% : 12034.793us 00:14:29.466 98.00000% : 13345.513us 
00:14:29.466 99.00000% : 30980.655us 00:14:29.466 99.50000% : 38606.662us 00:14:29.466 99.90000% : 40513.164us 00:14:29.466 99.99000% : 40989.789us 00:14:29.466 99.99900% : 40989.789us 00:14:29.466 99.99990% : 40989.789us 00:14:29.466 99.99999% : 40989.789us 00:14:29.466 00:14:29.466 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:14:29.466 ================================================================================= 00:14:29.466 1.00000% : 8400.524us 00:14:29.466 10.00000% : 8877.149us 00:14:29.466 25.00000% : 9294.196us 00:14:29.466 50.00000% : 9949.556us 00:14:29.466 75.00000% : 10485.760us 00:14:29.466 90.00000% : 11081.542us 00:14:29.466 95.00000% : 12034.793us 00:14:29.466 98.00000% : 13464.669us 00:14:29.466 99.00000% : 22163.084us 00:14:29.466 99.50000% : 29669.935us 00:14:29.466 99.90000% : 31695.593us 00:14:29.466 99.99000% : 32172.218us 00:14:29.466 99.99900% : 32172.218us 00:14:29.466 99.99990% : 32172.218us 00:14:29.466 99.99999% : 32172.218us 00:14:29.466 00:14:29.466 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:14:29.466 ================================================================================= 00:14:29.466 1.00000% : 8340.945us 00:14:29.466 10.00000% : 8877.149us 00:14:29.466 25.00000% : 9353.775us 00:14:29.466 50.00000% : 9949.556us 00:14:29.466 75.00000% : 10485.760us 00:14:29.466 90.00000% : 11021.964us 00:14:29.466 95.00000% : 12094.371us 00:14:29.466 98.00000% : 13643.404us 00:14:29.466 99.00000% : 19065.018us 00:14:29.466 99.50000% : 26571.869us 00:14:29.466 99.90000% : 28597.527us 00:14:29.466 99.99000% : 28954.996us 00:14:29.466 99.99900% : 29074.153us 00:14:29.466 99.99990% : 29074.153us 00:14:29.466 99.99999% : 29074.153us 00:14:29.466 00:14:29.466 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:14:29.466 ============================================================================== 00:14:29.466 Range in us Cumulative IO count 00:14:29.466 7864.320 - 7923.898: 0.0080% ( 1) 00:14:29.466 7923.898 - 7983.476: 0.0321% ( 3) 00:14:29.466 7983.476 - 8043.055: 0.0801% ( 6) 00:14:29.466 8043.055 - 8102.633: 0.1763% ( 12) 00:14:29.466 8102.633 - 8162.211: 0.3686% ( 24) 00:14:29.466 8162.211 - 8221.789: 0.7292% ( 45) 00:14:29.466 8221.789 - 8281.367: 1.1939% ( 58) 00:14:29.466 8281.367 - 8340.945: 1.7468% ( 69) 00:14:29.466 8340.945 - 8400.524: 2.4599% ( 89) 00:14:29.466 8400.524 - 8460.102: 3.3494% ( 111) 00:14:29.466 8460.102 - 8519.680: 4.4231% ( 134) 00:14:29.466 8519.680 - 8579.258: 5.5208% ( 137) 00:14:29.466 8579.258 - 8638.836: 6.8750% ( 169) 00:14:29.466 8638.836 - 8698.415: 8.3734% ( 187) 00:14:29.466 8698.415 - 8757.993: 9.7356% ( 170) 00:14:29.466 8757.993 - 8817.571: 11.3622% ( 203) 00:14:29.466 8817.571 - 8877.149: 12.8446% ( 185) 00:14:29.466 8877.149 - 8936.727: 14.3910% ( 193) 00:14:29.466 8936.727 - 8996.305: 15.9696% ( 197) 00:14:29.466 8996.305 - 9055.884: 17.6843% ( 214) 00:14:29.466 9055.884 - 9115.462: 19.3349% ( 206) 00:14:29.466 9115.462 - 9175.040: 21.1859% ( 231) 00:14:29.466 9175.040 - 9234.618: 23.0929% ( 238) 00:14:29.466 9234.618 - 9294.196: 25.2244% ( 266) 00:14:29.466 9294.196 - 9353.775: 27.6282% ( 300) 00:14:29.466 9353.775 - 9413.353: 30.0561% ( 303) 00:14:29.466 9413.353 - 9472.931: 32.6603% ( 325) 00:14:29.466 9472.931 - 9532.509: 35.2003% ( 317) 00:14:29.466 9532.509 - 9592.087: 37.8125% ( 326) 00:14:29.466 9592.087 - 9651.665: 40.5048% ( 336) 00:14:29.466 9651.665 - 9711.244: 43.1891% ( 335) 00:14:29.466 9711.244 - 9770.822: 45.7212% ( 316) 00:14:29.466 9770.822 - 
9830.400: 48.2051% ( 310) 00:14:29.466 9830.400 - 9889.978: 50.6330% ( 303) 00:14:29.466 9889.978 - 9949.556: 53.0369% ( 300) 00:14:29.466 9949.556 - 10009.135: 55.4167% ( 297) 00:14:29.466 10009.135 - 10068.713: 57.8446% ( 303) 00:14:29.466 10068.713 - 10128.291: 60.2804% ( 304) 00:14:29.466 10128.291 - 10187.869: 62.5721% ( 286) 00:14:29.466 10187.869 - 10247.447: 64.8878% ( 289) 00:14:29.466 10247.447 - 10307.025: 67.1554% ( 283) 00:14:29.466 10307.025 - 10366.604: 69.5272% ( 296) 00:14:29.466 10366.604 - 10426.182: 71.7228% ( 274) 00:14:29.466 10426.182 - 10485.760: 74.0545% ( 291) 00:14:29.466 10485.760 - 10545.338: 76.2981% ( 280) 00:14:29.466 10545.338 - 10604.916: 78.5978% ( 287) 00:14:29.466 10604.916 - 10664.495: 80.7212% ( 265) 00:14:29.466 10664.495 - 10724.073: 82.7724% ( 256) 00:14:29.466 10724.073 - 10783.651: 84.4631% ( 211) 00:14:29.466 10783.651 - 10843.229: 85.9776% ( 189) 00:14:29.466 10843.229 - 10902.807: 87.3077% ( 166) 00:14:29.466 10902.807 - 10962.385: 88.4375% ( 141) 00:14:29.466 10962.385 - 11021.964: 89.4231% ( 123) 00:14:29.466 11021.964 - 11081.542: 90.2644% ( 105) 00:14:29.466 11081.542 - 11141.120: 90.9054% ( 80) 00:14:29.466 11141.120 - 11200.698: 91.4744% ( 71) 00:14:29.466 11200.698 - 11260.276: 91.9151% ( 55) 00:14:29.466 11260.276 - 11319.855: 92.4199% ( 63) 00:14:29.466 11319.855 - 11379.433: 92.7564% ( 42) 00:14:29.466 11379.433 - 11439.011: 93.0689% ( 39) 00:14:29.466 11439.011 - 11498.589: 93.3734% ( 38) 00:14:29.466 11498.589 - 11558.167: 93.5737% ( 25) 00:14:29.466 11558.167 - 11617.745: 93.8462% ( 34) 00:14:29.466 11617.745 - 11677.324: 94.0946% ( 31) 00:14:29.466 11677.324 - 11736.902: 94.3750% ( 35) 00:14:29.466 11736.902 - 11796.480: 94.5833% ( 26) 00:14:29.466 11796.480 - 11856.058: 94.7436% ( 20) 00:14:29.466 11856.058 - 11915.636: 94.9439% ( 25) 00:14:29.466 11915.636 - 11975.215: 95.1202% ( 22) 00:14:29.466 11975.215 - 12034.793: 95.3045% ( 23) 00:14:29.466 12034.793 - 12094.371: 95.4808% ( 22) 00:14:29.466 12094.371 - 12153.949: 95.6410% ( 20) 00:14:29.466 12153.949 - 12213.527: 95.8013% ( 20) 00:14:29.466 12213.527 - 12273.105: 95.9535% ( 19) 00:14:29.466 12273.105 - 12332.684: 96.0817% ( 16) 00:14:29.466 12332.684 - 12392.262: 96.2580% ( 22) 00:14:29.466 12392.262 - 12451.840: 96.3942% ( 17) 00:14:29.466 12451.840 - 12511.418: 96.5304% ( 17) 00:14:29.466 12511.418 - 12570.996: 96.6667% ( 17) 00:14:29.466 12570.996 - 12630.575: 96.8029% ( 17) 00:14:29.466 12630.575 - 12690.153: 96.9231% ( 15) 00:14:29.466 12690.153 - 12749.731: 97.0673% ( 18) 00:14:29.466 12749.731 - 12809.309: 97.2356% ( 21) 00:14:29.466 12809.309 - 12868.887: 97.3478% ( 14) 00:14:29.466 12868.887 - 12928.465: 97.5080% ( 20) 00:14:29.466 12928.465 - 12988.044: 97.6282% ( 15) 00:14:29.466 12988.044 - 13047.622: 97.7244% ( 12) 00:14:29.466 13047.622 - 13107.200: 97.8446% ( 15) 00:14:29.466 13107.200 - 13166.778: 97.9407% ( 12) 00:14:29.466 13166.778 - 13226.356: 97.9968% ( 7) 00:14:29.466 13226.356 - 13285.935: 98.0689% ( 9) 00:14:29.466 13285.935 - 13345.513: 98.1170% ( 6) 00:14:29.466 13345.513 - 13405.091: 98.1731% ( 7) 00:14:29.466 13405.091 - 13464.669: 98.2212% ( 6) 00:14:29.466 13464.669 - 13524.247: 98.2772% ( 7) 00:14:29.466 13524.247 - 13583.825: 98.3333% ( 7) 00:14:29.466 13583.825 - 13643.404: 98.4054% ( 9) 00:14:29.466 13643.404 - 13702.982: 98.4535% ( 6) 00:14:29.466 13702.982 - 13762.560: 98.5016% ( 6) 00:14:29.466 13762.560 - 13822.138: 98.5657% ( 8) 00:14:29.466 13822.138 - 13881.716: 98.6218% ( 7) 00:14:29.466 13881.716 - 13941.295: 98.6779% ( 7) 
00:14:29.466 13941.295 - 14000.873: 98.7179% ( 5) 00:14:29.466 14000.873 - 14060.451: 98.7500% ( 4) 00:14:29.466 14060.451 - 14120.029: 98.7981% ( 6) 00:14:29.466 14120.029 - 14179.607: 98.8301% ( 4) 00:14:29.466 14179.607 - 14239.185: 98.8542% ( 3) 00:14:29.466 14239.185 - 14298.764: 98.8862% ( 4) 00:14:29.466 14298.764 - 14358.342: 98.9022% ( 2) 00:14:29.466 14358.342 - 14417.920: 98.9263% ( 3) 00:14:29.466 14417.920 - 14477.498: 98.9423% ( 2) 00:14:29.466 14477.498 - 14537.076: 98.9583% ( 2) 00:14:29.466 14537.076 - 14596.655: 98.9663% ( 1) 00:14:29.466 14596.655 - 14656.233: 98.9744% ( 1) 00:14:29.466 40513.164 - 40751.476: 99.0064% ( 4) 00:14:29.466 40751.476 - 40989.789: 99.0545% ( 6) 00:14:29.466 40989.789 - 41228.102: 99.0865% ( 4) 00:14:29.466 41228.102 - 41466.415: 99.1346% ( 6) 00:14:29.466 41466.415 - 41704.727: 99.1827% ( 6) 00:14:29.466 41704.727 - 41943.040: 99.2228% ( 5) 00:14:29.466 41943.040 - 42181.353: 99.2628% ( 5) 00:14:29.466 42181.353 - 42419.665: 99.3109% ( 6) 00:14:29.466 42419.665 - 42657.978: 99.3510% ( 5) 00:14:29.466 42657.978 - 42896.291: 99.3990% ( 6) 00:14:29.466 42896.291 - 43134.604: 99.4471% ( 6) 00:14:29.467 43134.604 - 43372.916: 99.4872% ( 5) 00:14:29.467 47900.858 - 48139.171: 99.5112% ( 3) 00:14:29.467 48139.171 - 48377.484: 99.5513% ( 5) 00:14:29.467 48377.484 - 48615.796: 99.5994% ( 6) 00:14:29.467 48615.796 - 48854.109: 99.6394% ( 5) 00:14:29.467 48854.109 - 49092.422: 99.6875% ( 6) 00:14:29.467 49092.422 - 49330.735: 99.7356% ( 6) 00:14:29.467 49330.735 - 49569.047: 99.7756% ( 5) 00:14:29.467 49569.047 - 49807.360: 99.8157% ( 5) 00:14:29.467 49807.360 - 50045.673: 99.8558% ( 5) 00:14:29.467 50045.673 - 50283.985: 99.9119% ( 7) 00:14:29.467 50283.985 - 50522.298: 99.9519% ( 5) 00:14:29.467 50522.298 - 50760.611: 99.9920% ( 5) 00:14:29.467 50760.611 - 50998.924: 100.0000% ( 1) 00:14:29.467 00:14:29.467 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:14:29.467 ============================================================================== 00:14:29.467 Range in us Cumulative IO count 00:14:29.467 8043.055 - 8102.633: 0.0401% ( 5) 00:14:29.467 8102.633 - 8162.211: 0.1282% ( 11) 00:14:29.467 8162.211 - 8221.789: 0.2885% ( 20) 00:14:29.467 8221.789 - 8281.367: 0.5769% ( 36) 00:14:29.467 8281.367 - 8340.945: 1.0256% ( 56) 00:14:29.467 8340.945 - 8400.524: 1.6587% ( 79) 00:14:29.467 8400.524 - 8460.102: 2.3638% ( 88) 00:14:29.467 8460.102 - 8519.680: 3.3494% ( 123) 00:14:29.467 8519.680 - 8579.258: 4.4712% ( 140) 00:14:29.467 8579.258 - 8638.836: 5.7772% ( 163) 00:14:29.467 8638.836 - 8698.415: 7.1554% ( 172) 00:14:29.467 8698.415 - 8757.993: 8.7580% ( 200) 00:14:29.467 8757.993 - 8817.571: 10.4327% ( 209) 00:14:29.467 8817.571 - 8877.149: 12.2436% ( 226) 00:14:29.467 8877.149 - 8936.727: 14.0625% ( 227) 00:14:29.467 8936.727 - 8996.305: 15.9054% ( 230) 00:14:29.467 8996.305 - 9055.884: 17.7404% ( 229) 00:14:29.467 9055.884 - 9115.462: 19.7115% ( 246) 00:14:29.467 9115.462 - 9175.040: 21.6346% ( 240) 00:14:29.467 9175.040 - 9234.618: 23.5337% ( 237) 00:14:29.467 9234.618 - 9294.196: 25.4888% ( 244) 00:14:29.467 9294.196 - 9353.775: 27.4119% ( 240) 00:14:29.467 9353.775 - 9413.353: 29.6074% ( 274) 00:14:29.467 9413.353 - 9472.931: 31.9872% ( 297) 00:14:29.467 9472.931 - 9532.509: 34.2869% ( 287) 00:14:29.467 9532.509 - 9592.087: 36.6346% ( 293) 00:14:29.467 9592.087 - 9651.665: 39.1827% ( 318) 00:14:29.467 9651.665 - 9711.244: 41.7548% ( 321) 00:14:29.467 9711.244 - 9770.822: 44.2788% ( 315) 00:14:29.467 9770.822 - 9830.400: 46.8750% ( 
324) 00:14:29.467 9830.400 - 9889.978: 49.4792% ( 325) 00:14:29.467 9889.978 - 9949.556: 52.0913% ( 326) 00:14:29.467 9949.556 - 10009.135: 54.7276% ( 329) 00:14:29.467 10009.135 - 10068.713: 57.3958% ( 333) 00:14:29.467 10068.713 - 10128.291: 60.0641% ( 333) 00:14:29.467 10128.291 - 10187.869: 62.8205% ( 344) 00:14:29.467 10187.869 - 10247.447: 65.4647% ( 330) 00:14:29.467 10247.447 - 10307.025: 68.1971% ( 341) 00:14:29.467 10307.025 - 10366.604: 70.8494% ( 331) 00:14:29.467 10366.604 - 10426.182: 73.4215% ( 321) 00:14:29.467 10426.182 - 10485.760: 75.9455% ( 315) 00:14:29.467 10485.760 - 10545.338: 78.2612% ( 289) 00:14:29.467 10545.338 - 10604.916: 80.5529% ( 286) 00:14:29.467 10604.916 - 10664.495: 82.5721% ( 252) 00:14:29.467 10664.495 - 10724.073: 84.3109% ( 217) 00:14:29.467 10724.073 - 10783.651: 85.9215% ( 201) 00:14:29.467 10783.651 - 10843.229: 87.2436% ( 165) 00:14:29.467 10843.229 - 10902.807: 88.3013% ( 132) 00:14:29.467 10902.807 - 10962.385: 89.2388% ( 117) 00:14:29.467 10962.385 - 11021.964: 89.9760% ( 92) 00:14:29.467 11021.964 - 11081.542: 90.6010% ( 78) 00:14:29.467 11081.542 - 11141.120: 91.1699% ( 71) 00:14:29.467 11141.120 - 11200.698: 91.6587% ( 61) 00:14:29.467 11200.698 - 11260.276: 92.0513% ( 49) 00:14:29.467 11260.276 - 11319.855: 92.4519% ( 50) 00:14:29.467 11319.855 - 11379.433: 92.8045% ( 44) 00:14:29.467 11379.433 - 11439.011: 93.0929% ( 36) 00:14:29.467 11439.011 - 11498.589: 93.2933% ( 25) 00:14:29.467 11498.589 - 11558.167: 93.4776% ( 23) 00:14:29.467 11558.167 - 11617.745: 93.6619% ( 23) 00:14:29.467 11617.745 - 11677.324: 93.8542% ( 24) 00:14:29.467 11677.324 - 11736.902: 94.0064% ( 19) 00:14:29.467 11736.902 - 11796.480: 94.1667% ( 20) 00:14:29.467 11796.480 - 11856.058: 94.3590% ( 24) 00:14:29.467 11856.058 - 11915.636: 94.5433% ( 23) 00:14:29.467 11915.636 - 11975.215: 94.7035% ( 20) 00:14:29.467 11975.215 - 12034.793: 94.8958% ( 24) 00:14:29.467 12034.793 - 12094.371: 95.0801% ( 23) 00:14:29.467 12094.371 - 12153.949: 95.2484% ( 21) 00:14:29.467 12153.949 - 12213.527: 95.4087% ( 20) 00:14:29.467 12213.527 - 12273.105: 95.6010% ( 24) 00:14:29.467 12273.105 - 12332.684: 95.7612% ( 20) 00:14:29.467 12332.684 - 12392.262: 95.9135% ( 19) 00:14:29.467 12392.262 - 12451.840: 96.0497% ( 17) 00:14:29.467 12451.840 - 12511.418: 96.1859% ( 17) 00:14:29.467 12511.418 - 12570.996: 96.3542% ( 21) 00:14:29.467 12570.996 - 12630.575: 96.5144% ( 20) 00:14:29.467 12630.575 - 12690.153: 96.6747% ( 20) 00:14:29.467 12690.153 - 12749.731: 96.8349% ( 20) 00:14:29.467 12749.731 - 12809.309: 96.9952% ( 20) 00:14:29.467 12809.309 - 12868.887: 97.1394% ( 18) 00:14:29.467 12868.887 - 12928.465: 97.2756% ( 17) 00:14:29.467 12928.465 - 12988.044: 97.4359% ( 20) 00:14:29.467 12988.044 - 13047.622: 97.5881% ( 19) 00:14:29.467 13047.622 - 13107.200: 97.7003% ( 14) 00:14:29.467 13107.200 - 13166.778: 97.7965% ( 12) 00:14:29.467 13166.778 - 13226.356: 97.8846% ( 11) 00:14:29.467 13226.356 - 13285.935: 97.9407% ( 7) 00:14:29.467 13285.935 - 13345.513: 98.0048% ( 8) 00:14:29.467 13345.513 - 13405.091: 98.0929% ( 11) 00:14:29.467 13405.091 - 13464.669: 98.1571% ( 8) 00:14:29.467 13464.669 - 13524.247: 98.2212% ( 8) 00:14:29.467 13524.247 - 13583.825: 98.2853% ( 8) 00:14:29.467 13583.825 - 13643.404: 98.3413% ( 7) 00:14:29.467 13643.404 - 13702.982: 98.4135% ( 9) 00:14:29.467 13702.982 - 13762.560: 98.4776% ( 8) 00:14:29.467 13762.560 - 13822.138: 98.5497% ( 9) 00:14:29.467 13822.138 - 13881.716: 98.6138% ( 8) 00:14:29.467 13881.716 - 13941.295: 98.6779% ( 8) 00:14:29.467 13941.295 - 
14000.873: 98.7260% ( 6) 00:14:29.467 14000.873 - 14060.451: 98.7981% ( 9) 00:14:29.467 14060.451 - 14120.029: 98.8622% ( 8) 00:14:29.467 14120.029 - 14179.607: 98.9263% ( 8) 00:14:29.467 14179.607 - 14239.185: 98.9503% ( 3) 00:14:29.467 14239.185 - 14298.764: 98.9663% ( 2) 00:14:29.467 14298.764 - 14358.342: 98.9744% ( 1) 00:14:29.467 37415.098 - 37653.411: 98.9984% ( 3) 00:14:29.467 37653.411 - 37891.724: 99.0385% ( 5) 00:14:29.467 37891.724 - 38130.036: 99.0785% ( 5) 00:14:29.467 38130.036 - 38368.349: 99.1266% ( 6) 00:14:29.467 38368.349 - 38606.662: 99.1747% ( 6) 00:14:29.467 38606.662 - 38844.975: 99.2228% ( 6) 00:14:29.467 38844.975 - 39083.287: 99.2708% ( 6) 00:14:29.467 39083.287 - 39321.600: 99.3109% ( 5) 00:14:29.467 39321.600 - 39559.913: 99.3590% ( 6) 00:14:29.467 39559.913 - 39798.225: 99.4071% ( 6) 00:14:29.467 39798.225 - 40036.538: 99.4551% ( 6) 00:14:29.467 40036.538 - 40274.851: 99.4872% ( 4) 00:14:29.467 45041.105 - 45279.418: 99.5353% ( 6) 00:14:29.467 45279.418 - 45517.731: 99.5753% ( 5) 00:14:29.467 45517.731 - 45756.044: 99.6234% ( 6) 00:14:29.467 45756.044 - 45994.356: 99.6715% ( 6) 00:14:29.467 45994.356 - 46232.669: 99.7196% ( 6) 00:14:29.467 46232.669 - 46470.982: 99.7676% ( 6) 00:14:29.467 46470.982 - 46709.295: 99.8157% ( 6) 00:14:29.467 46709.295 - 46947.607: 99.8638% ( 6) 00:14:29.467 46947.607 - 47185.920: 99.9119% ( 6) 00:14:29.467 47185.920 - 47424.233: 99.9599% ( 6) 00:14:29.467 47424.233 - 47662.545: 100.0000% ( 5) 00:14:29.467 00:14:29.467 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:14:29.467 ============================================================================== 00:14:29.467 Range in us Cumulative IO count 00:14:29.467 7983.476 - 8043.055: 0.0080% ( 1) 00:14:29.467 8043.055 - 8102.633: 0.0721% ( 8) 00:14:29.467 8102.633 - 8162.211: 0.1763% ( 13) 00:14:29.467 8162.211 - 8221.789: 0.3846% ( 26) 00:14:29.467 8221.789 - 8281.367: 0.6490% ( 33) 00:14:29.467 8281.367 - 8340.945: 1.0497% ( 50) 00:14:29.467 8340.945 - 8400.524: 1.6106% ( 70) 00:14:29.467 8400.524 - 8460.102: 2.3478% ( 92) 00:14:29.467 8460.102 - 8519.680: 3.3333% ( 123) 00:14:29.467 8519.680 - 8579.258: 4.4471% ( 139) 00:14:29.467 8579.258 - 8638.836: 5.6811% ( 154) 00:14:29.467 8638.836 - 8698.415: 7.0913% ( 176) 00:14:29.467 8698.415 - 8757.993: 8.6058% ( 189) 00:14:29.467 8757.993 - 8817.571: 10.2724% ( 208) 00:14:29.467 8817.571 - 8877.149: 12.0513% ( 222) 00:14:29.467 8877.149 - 8936.727: 13.8702% ( 227) 00:14:29.467 8936.727 - 8996.305: 15.7292% ( 232) 00:14:29.467 8996.305 - 9055.884: 17.6442% ( 239) 00:14:29.467 9055.884 - 9115.462: 19.6074% ( 245) 00:14:29.467 9115.462 - 9175.040: 21.5705% ( 245) 00:14:29.467 9175.040 - 9234.618: 23.5176% ( 243) 00:14:29.467 9234.618 - 9294.196: 25.5128% ( 249) 00:14:29.467 9294.196 - 9353.775: 27.4920% ( 247) 00:14:29.467 9353.775 - 9413.353: 29.7917% ( 287) 00:14:29.467 9413.353 - 9472.931: 31.9872% ( 274) 00:14:29.467 9472.931 - 9532.509: 34.2308% ( 280) 00:14:29.467 9532.509 - 9592.087: 36.6747% ( 305) 00:14:29.467 9592.087 - 9651.665: 39.1186% ( 305) 00:14:29.467 9651.665 - 9711.244: 41.7708% ( 331) 00:14:29.467 9711.244 - 9770.822: 44.3830% ( 326) 00:14:29.467 9770.822 - 9830.400: 46.9872% ( 325) 00:14:29.467 9830.400 - 9889.978: 49.5913% ( 325) 00:14:29.467 9889.978 - 9949.556: 52.1554% ( 320) 00:14:29.467 9949.556 - 10009.135: 54.7837% ( 328) 00:14:29.467 10009.135 - 10068.713: 57.4439% ( 332) 00:14:29.467 10068.713 - 10128.291: 60.0401% ( 324) 00:14:29.467 10128.291 - 10187.869: 62.7644% ( 340) 00:14:29.467 
10187.869 - 10247.447: 65.4327% ( 333) 00:14:29.467 10247.447 - 10307.025: 68.0369% ( 325) 00:14:29.467 10307.025 - 10366.604: 70.5609% ( 315) 00:14:29.467 10366.604 - 10426.182: 73.1410% ( 322) 00:14:29.467 10426.182 - 10485.760: 75.7532% ( 326) 00:14:29.467 10485.760 - 10545.338: 78.1811% ( 303) 00:14:29.467 10545.338 - 10604.916: 80.5288% ( 293) 00:14:29.468 10604.916 - 10664.495: 82.5801% ( 256) 00:14:29.468 10664.495 - 10724.073: 84.3269% ( 218) 00:14:29.468 10724.073 - 10783.651: 85.8974% ( 196) 00:14:29.468 10783.651 - 10843.229: 87.1795% ( 160) 00:14:29.468 10843.229 - 10902.807: 88.2452% ( 133) 00:14:29.468 10902.807 - 10962.385: 89.2147% ( 121) 00:14:29.468 10962.385 - 11021.964: 89.9920% ( 97) 00:14:29.468 11021.964 - 11081.542: 90.7051% ( 89) 00:14:29.468 11081.542 - 11141.120: 91.2340% ( 66) 00:14:29.468 11141.120 - 11200.698: 91.7388% ( 63) 00:14:29.468 11200.698 - 11260.276: 92.1715% ( 54) 00:14:29.468 11260.276 - 11319.855: 92.5401% ( 46) 00:14:29.468 11319.855 - 11379.433: 92.8846% ( 43) 00:14:29.468 11379.433 - 11439.011: 93.1330% ( 31) 00:14:29.468 11439.011 - 11498.589: 93.3333% ( 25) 00:14:29.468 11498.589 - 11558.167: 93.5417% ( 26) 00:14:29.468 11558.167 - 11617.745: 93.7500% ( 26) 00:14:29.468 11617.745 - 11677.324: 93.9503% ( 25) 00:14:29.468 11677.324 - 11736.902: 94.1587% ( 26) 00:14:29.468 11736.902 - 11796.480: 94.4071% ( 31) 00:14:29.468 11796.480 - 11856.058: 94.6474% ( 30) 00:14:29.468 11856.058 - 11915.636: 94.8077% ( 20) 00:14:29.468 11915.636 - 11975.215: 94.9679% ( 20) 00:14:29.468 11975.215 - 12034.793: 95.1362% ( 21) 00:14:29.468 12034.793 - 12094.371: 95.2804% ( 18) 00:14:29.468 12094.371 - 12153.949: 95.4407% ( 20) 00:14:29.468 12153.949 - 12213.527: 95.5849% ( 18) 00:14:29.468 12213.527 - 12273.105: 95.7212% ( 17) 00:14:29.468 12273.105 - 12332.684: 95.8574% ( 17) 00:14:29.468 12332.684 - 12392.262: 96.0096% ( 19) 00:14:29.468 12392.262 - 12451.840: 96.1699% ( 20) 00:14:29.468 12451.840 - 12511.418: 96.3221% ( 19) 00:14:29.468 12511.418 - 12570.996: 96.4984% ( 22) 00:14:29.468 12570.996 - 12630.575: 96.6907% ( 24) 00:14:29.468 12630.575 - 12690.153: 96.8670% ( 22) 00:14:29.468 12690.153 - 12749.731: 96.9872% ( 15) 00:14:29.468 12749.731 - 12809.309: 97.1554% ( 21) 00:14:29.468 12809.309 - 12868.887: 97.3077% ( 19) 00:14:29.468 12868.887 - 12928.465: 97.4599% ( 19) 00:14:29.468 12928.465 - 12988.044: 97.5801% ( 15) 00:14:29.468 12988.044 - 13047.622: 97.6923% ( 14) 00:14:29.468 13047.622 - 13107.200: 97.7724% ( 10) 00:14:29.468 13107.200 - 13166.778: 97.8606% ( 11) 00:14:29.468 13166.778 - 13226.356: 97.9487% ( 11) 00:14:29.468 13226.356 - 13285.935: 98.0369% ( 11) 00:14:29.468 13285.935 - 13345.513: 98.1250% ( 11) 00:14:29.468 13345.513 - 13405.091: 98.2131% ( 11) 00:14:29.468 13405.091 - 13464.669: 98.3013% ( 11) 00:14:29.468 13464.669 - 13524.247: 98.3654% ( 8) 00:14:29.468 13524.247 - 13583.825: 98.4375% ( 9) 00:14:29.468 13583.825 - 13643.404: 98.4936% ( 7) 00:14:29.468 13643.404 - 13702.982: 98.5577% ( 8) 00:14:29.468 13702.982 - 13762.560: 98.6218% ( 8) 00:14:29.468 13762.560 - 13822.138: 98.6779% ( 7) 00:14:29.468 13822.138 - 13881.716: 98.7260% ( 6) 00:14:29.468 13881.716 - 13941.295: 98.7740% ( 6) 00:14:29.468 13941.295 - 14000.873: 98.8141% ( 5) 00:14:29.468 14000.873 - 14060.451: 98.8542% ( 5) 00:14:29.468 14060.451 - 14120.029: 98.8942% ( 5) 00:14:29.468 14120.029 - 14179.607: 98.9263% ( 4) 00:14:29.468 14179.607 - 14239.185: 98.9503% ( 3) 00:14:29.468 14239.185 - 14298.764: 98.9744% ( 3) 00:14:29.468 33602.095 - 33840.407: 98.9984% ( 3) 
00:14:29.468 33840.407 - 34078.720: 99.0465% ( 6) 00:14:29.468 34078.720 - 34317.033: 99.0946% ( 6) 00:14:29.468 34317.033 - 34555.345: 99.1426% ( 6) 00:14:29.468 34555.345 - 34793.658: 99.1827% ( 5) 00:14:29.468 34793.658 - 35031.971: 99.2308% ( 6) 00:14:29.468 35031.971 - 35270.284: 99.2788% ( 6) 00:14:29.468 35270.284 - 35508.596: 99.3269% ( 6) 00:14:29.468 35508.596 - 35746.909: 99.3750% ( 6) 00:14:29.468 35746.909 - 35985.222: 99.3910% ( 2) 00:14:29.468 37176.785 - 37415.098: 99.4391% ( 6) 00:14:29.468 37415.098 - 37653.411: 99.4872% ( 6) 00:14:29.468 42419.665 - 42657.978: 99.5032% ( 2) 00:14:29.468 42657.978 - 42896.291: 99.5353% ( 4) 00:14:29.468 42896.291 - 43134.604: 99.5833% ( 6) 00:14:29.468 43134.604 - 43372.916: 99.6234% ( 5) 00:14:29.468 43372.916 - 43611.229: 99.6715% ( 6) 00:14:29.468 43611.229 - 43849.542: 99.7276% ( 7) 00:14:29.468 43849.542 - 44087.855: 99.7756% ( 6) 00:14:29.468 44087.855 - 44326.167: 99.8237% ( 6) 00:14:29.468 44326.167 - 44564.480: 99.8638% ( 5) 00:14:29.468 44564.480 - 44802.793: 99.9119% ( 6) 00:14:29.468 44802.793 - 45041.105: 99.9599% ( 6) 00:14:29.468 45041.105 - 45279.418: 100.0000% ( 5) 00:14:29.468 00:14:29.468 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:14:29.468 ============================================================================== 00:14:29.468 Range in us Cumulative IO count 00:14:29.468 7983.476 - 8043.055: 0.0401% ( 5) 00:14:29.468 8043.055 - 8102.633: 0.0962% ( 7) 00:14:29.468 8102.633 - 8162.211: 0.1843% ( 11) 00:14:29.468 8162.211 - 8221.789: 0.3446% ( 20) 00:14:29.468 8221.789 - 8281.367: 0.5529% ( 26) 00:14:29.468 8281.367 - 8340.945: 1.0096% ( 57) 00:14:29.468 8340.945 - 8400.524: 1.5224% ( 64) 00:14:29.468 8400.524 - 8460.102: 2.2196% ( 87) 00:14:29.468 8460.102 - 8519.680: 3.1731% ( 119) 00:14:29.468 8519.680 - 8579.258: 4.2147% ( 130) 00:14:29.468 8579.258 - 8638.836: 5.4006% ( 148) 00:14:29.468 8638.836 - 8698.415: 6.7228% ( 165) 00:14:29.468 8698.415 - 8757.993: 8.2051% ( 185) 00:14:29.468 8757.993 - 8817.571: 9.9359% ( 216) 00:14:29.468 8817.571 - 8877.149: 11.7468% ( 226) 00:14:29.468 8877.149 - 8936.727: 13.6298% ( 235) 00:14:29.468 8936.727 - 8996.305: 15.4968% ( 233) 00:14:29.468 8996.305 - 9055.884: 17.4038% ( 238) 00:14:29.468 9055.884 - 9115.462: 19.3189% ( 239) 00:14:29.468 9115.462 - 9175.040: 21.2901% ( 246) 00:14:29.468 9175.040 - 9234.618: 23.2212% ( 241) 00:14:29.468 9234.618 - 9294.196: 25.2484% ( 253) 00:14:29.468 9294.196 - 9353.775: 27.2115% ( 245) 00:14:29.468 9353.775 - 9413.353: 29.3830% ( 271) 00:14:29.468 9413.353 - 9472.931: 31.6667% ( 285) 00:14:29.468 9472.931 - 9532.509: 33.9103% ( 280) 00:14:29.468 9532.509 - 9592.087: 36.3702% ( 307) 00:14:29.468 9592.087 - 9651.665: 39.0064% ( 329) 00:14:29.468 9651.665 - 9711.244: 41.6907% ( 335) 00:14:29.468 9711.244 - 9770.822: 44.2788% ( 323) 00:14:29.468 9770.822 - 9830.400: 46.8590% ( 322) 00:14:29.468 9830.400 - 9889.978: 49.3910% ( 316) 00:14:29.468 9889.978 - 9949.556: 51.9712% ( 322) 00:14:29.468 9949.556 - 10009.135: 54.6635% ( 336) 00:14:29.468 10009.135 - 10068.713: 57.3397% ( 334) 00:14:29.468 10068.713 - 10128.291: 60.0481% ( 338) 00:14:29.468 10128.291 - 10187.869: 62.6122% ( 320) 00:14:29.468 10187.869 - 10247.447: 65.3045% ( 336) 00:14:29.468 10247.447 - 10307.025: 67.8846% ( 322) 00:14:29.468 10307.025 - 10366.604: 70.5288% ( 330) 00:14:29.468 10366.604 - 10426.182: 73.1490% ( 327) 00:14:29.468 10426.182 - 10485.760: 75.7452% ( 324) 00:14:29.468 10485.760 - 10545.338: 78.2612% ( 314) 00:14:29.468 10545.338 - 
10604.916: 80.6090% ( 293) 00:14:29.468 10604.916 - 10664.495: 82.7244% ( 264) 00:14:29.468 10664.495 - 10724.073: 84.5994% ( 234) 00:14:29.468 10724.073 - 10783.651: 86.1699% ( 196) 00:14:29.468 10783.651 - 10843.229: 87.4119% ( 155) 00:14:29.468 10843.229 - 10902.807: 88.4535% ( 130) 00:14:29.468 10902.807 - 10962.385: 89.4311% ( 122) 00:14:29.468 10962.385 - 11021.964: 90.1923% ( 95) 00:14:29.468 11021.964 - 11081.542: 90.9455% ( 94) 00:14:29.468 11081.542 - 11141.120: 91.5545% ( 76) 00:14:29.468 11141.120 - 11200.698: 92.0593% ( 63) 00:14:29.468 11200.698 - 11260.276: 92.4840% ( 53) 00:14:29.468 11260.276 - 11319.855: 92.7965% ( 39) 00:14:29.468 11319.855 - 11379.433: 93.1571% ( 45) 00:14:29.468 11379.433 - 11439.011: 93.4615% ( 38) 00:14:29.468 11439.011 - 11498.589: 93.6859% ( 28) 00:14:29.468 11498.589 - 11558.167: 93.8221% ( 17) 00:14:29.468 11558.167 - 11617.745: 93.9503% ( 16) 00:14:29.468 11617.745 - 11677.324: 94.0865% ( 17) 00:14:29.468 11677.324 - 11736.902: 94.2228% ( 17) 00:14:29.468 11736.902 - 11796.480: 94.3590% ( 17) 00:14:29.468 11796.480 - 11856.058: 94.5513% ( 24) 00:14:29.468 11856.058 - 11915.636: 94.7196% ( 21) 00:14:29.468 11915.636 - 11975.215: 94.8958% ( 22) 00:14:29.468 11975.215 - 12034.793: 95.0641% ( 21) 00:14:29.468 12034.793 - 12094.371: 95.2244% ( 20) 00:14:29.468 12094.371 - 12153.949: 95.3846% ( 20) 00:14:29.468 12153.949 - 12213.527: 95.5609% ( 22) 00:14:29.468 12213.527 - 12273.105: 95.7372% ( 22) 00:14:29.468 12273.105 - 12332.684: 95.9054% ( 21) 00:14:29.468 12332.684 - 12392.262: 96.0817% ( 22) 00:14:29.468 12392.262 - 12451.840: 96.2179% ( 17) 00:14:29.468 12451.840 - 12511.418: 96.3702% ( 19) 00:14:29.468 12511.418 - 12570.996: 96.5385% ( 21) 00:14:29.468 12570.996 - 12630.575: 96.7067% ( 21) 00:14:29.468 12630.575 - 12690.153: 96.8750% ( 21) 00:14:29.468 12690.153 - 12749.731: 97.0032% ( 16) 00:14:29.468 12749.731 - 12809.309: 97.1234% ( 15) 00:14:29.468 12809.309 - 12868.887: 97.2756% ( 19) 00:14:29.468 12868.887 - 12928.465: 97.4119% ( 17) 00:14:29.468 12928.465 - 12988.044: 97.5321% ( 15) 00:14:29.468 12988.044 - 13047.622: 97.6282% ( 12) 00:14:29.468 13047.622 - 13107.200: 97.6923% ( 8) 00:14:29.468 13107.200 - 13166.778: 97.7724% ( 10) 00:14:29.468 13166.778 - 13226.356: 97.8526% ( 10) 00:14:29.468 13226.356 - 13285.935: 97.9327% ( 10) 00:14:29.468 13285.935 - 13345.513: 98.0208% ( 11) 00:14:29.468 13345.513 - 13405.091: 98.1170% ( 12) 00:14:29.468 13405.091 - 13464.669: 98.1891% ( 9) 00:14:29.468 13464.669 - 13524.247: 98.2532% ( 8) 00:14:29.468 13524.247 - 13583.825: 98.3173% ( 8) 00:14:29.468 13583.825 - 13643.404: 98.3814% ( 8) 00:14:29.468 13643.404 - 13702.982: 98.4375% ( 7) 00:14:29.468 13702.982 - 13762.560: 98.4776% ( 5) 00:14:29.468 13762.560 - 13822.138: 98.5176% ( 5) 00:14:29.468 13822.138 - 13881.716: 98.5657% ( 6) 00:14:29.468 13881.716 - 13941.295: 98.6058% ( 5) 00:14:29.468 13941.295 - 14000.873: 98.6458% ( 5) 00:14:29.468 14000.873 - 14060.451: 98.6859% ( 5) 00:14:29.468 14060.451 - 14120.029: 98.7260% ( 5) 00:14:29.468 14120.029 - 14179.607: 98.7740% ( 6) 00:14:29.468 14179.607 - 14239.185: 98.8221% ( 6) 00:14:29.468 14239.185 - 14298.764: 98.8622% ( 5) 00:14:29.469 14298.764 - 14358.342: 98.8942% ( 4) 00:14:29.469 14358.342 - 14417.920: 98.9423% ( 6) 00:14:29.469 14417.920 - 14477.498: 98.9583% ( 2) 00:14:29.469 14477.498 - 14537.076: 98.9744% ( 2) 00:14:29.469 30504.029 - 30742.342: 98.9904% ( 2) 00:14:29.469 30742.342 - 30980.655: 99.0304% ( 5) 00:14:29.469 30980.655 - 31218.967: 99.0785% ( 6) 00:14:29.469 31218.967 - 
31457.280: 99.1186% ( 5) 00:14:29.469 31457.280 - 31695.593: 99.1667% ( 6) 00:14:29.469 31695.593 - 31933.905: 99.2228% ( 7) 00:14:29.469 31933.905 - 32172.218: 99.2708% ( 6) 00:14:29.469 32172.218 - 32410.531: 99.3189% ( 6) 00:14:29.469 32410.531 - 32648.844: 99.3590% ( 5) 00:14:29.469 32648.844 - 32887.156: 99.4071% ( 6) 00:14:29.469 32887.156 - 33125.469: 99.4551% ( 6) 00:14:29.469 33125.469 - 33363.782: 99.4872% ( 4) 00:14:29.469 38130.036 - 38368.349: 99.4952% ( 1) 00:14:29.469 38368.349 - 38606.662: 99.5433% ( 6) 00:14:29.469 38606.662 - 38844.975: 99.5913% ( 6) 00:14:29.469 38844.975 - 39083.287: 99.6314% ( 5) 00:14:29.469 39083.287 - 39321.600: 99.6795% ( 6) 00:14:29.469 39321.600 - 39559.913: 99.7356% ( 7) 00:14:29.469 39559.913 - 39798.225: 99.7837% ( 6) 00:14:29.469 39798.225 - 40036.538: 99.8317% ( 6) 00:14:29.469 40036.538 - 40274.851: 99.8798% ( 6) 00:14:29.469 40274.851 - 40513.164: 99.9279% ( 6) 00:14:29.469 40513.164 - 40751.476: 99.9760% ( 6) 00:14:29.469 40751.476 - 40989.789: 100.0000% ( 3) 00:14:29.469 00:14:29.469 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:14:29.469 ============================================================================== 00:14:29.469 Range in us Cumulative IO count 00:14:29.469 7923.898 - 7983.476: 0.0080% ( 1) 00:14:29.469 7983.476 - 8043.055: 0.0399% ( 4) 00:14:29.469 8043.055 - 8102.633: 0.0877% ( 6) 00:14:29.469 8102.633 - 8162.211: 0.1754% ( 11) 00:14:29.469 8162.211 - 8221.789: 0.2950% ( 15) 00:14:29.469 8221.789 - 8281.367: 0.5501% ( 32) 00:14:29.469 8281.367 - 8340.945: 0.9726% ( 53) 00:14:29.469 8340.945 - 8400.524: 1.6103% ( 80) 00:14:29.469 8400.524 - 8460.102: 2.2879% ( 85) 00:14:29.469 8460.102 - 8519.680: 3.1808% ( 112) 00:14:29.469 8519.680 - 8579.258: 4.1614% ( 123) 00:14:29.469 8579.258 - 8638.836: 5.3651% ( 151) 00:14:29.469 8638.836 - 8698.415: 6.7761% ( 177) 00:14:29.469 8698.415 - 8757.993: 8.2350% ( 183) 00:14:29.469 8757.993 - 8817.571: 9.8613% ( 204) 00:14:29.469 8817.571 - 8877.149: 11.6869% ( 229) 00:14:29.469 8877.149 - 8936.727: 13.4965% ( 227) 00:14:29.469 8936.727 - 8996.305: 15.3779% ( 236) 00:14:29.469 8996.305 - 9055.884: 17.2752% ( 238) 00:14:29.469 9055.884 - 9115.462: 19.1645% ( 237) 00:14:29.469 9115.462 - 9175.040: 21.0619% ( 238) 00:14:29.469 9175.040 - 9234.618: 23.0548% ( 250) 00:14:29.469 9234.618 - 9294.196: 25.0717% ( 253) 00:14:29.469 9294.196 - 9353.775: 27.0089% ( 243) 00:14:29.469 9353.775 - 9413.353: 29.0258% ( 253) 00:14:29.469 9413.353 - 9472.931: 31.3138% ( 287) 00:14:29.469 9472.931 - 9532.509: 33.7372% ( 304) 00:14:29.469 9532.509 - 9592.087: 36.1368% ( 301) 00:14:29.469 9592.087 - 9651.665: 38.6480% ( 315) 00:14:29.469 9651.665 - 9711.244: 41.2628% ( 328) 00:14:29.469 9711.244 - 9770.822: 43.7899% ( 317) 00:14:29.469 9770.822 - 9830.400: 46.3568% ( 322) 00:14:29.469 9830.400 - 9889.978: 48.9238% ( 322) 00:14:29.469 9889.978 - 9949.556: 51.6263% ( 339) 00:14:29.469 9949.556 - 10009.135: 54.2092% ( 324) 00:14:29.469 10009.135 - 10068.713: 56.8878% ( 336) 00:14:29.469 10068.713 - 10128.291: 59.4707% ( 324) 00:14:29.469 10128.291 - 10187.869: 62.1173% ( 332) 00:14:29.469 10187.869 - 10247.447: 64.7720% ( 333) 00:14:29.469 10247.447 - 10307.025: 67.3390% ( 322) 00:14:29.469 10307.025 - 10366.604: 70.0733% ( 343) 00:14:29.469 10366.604 - 10426.182: 72.7360% ( 334) 00:14:29.469 10426.182 - 10485.760: 75.3587% ( 329) 00:14:29.469 10485.760 - 10545.338: 77.8699% ( 315) 00:14:29.469 10545.338 - 10604.916: 80.3173% ( 307) 00:14:29.469 10604.916 - 10664.495: 82.4378% ( 266) 
00:14:29.469 10664.495 - 10724.073: 84.2793% ( 231) 00:14:29.469 10724.073 - 10783.651: 85.8737% ( 200) 00:14:29.469 10783.651 - 10843.229: 87.2449% ( 172) 00:14:29.469 10843.229 - 10902.807: 88.3450% ( 138) 00:14:29.469 10902.807 - 10962.385: 89.1980% ( 107) 00:14:29.469 10962.385 - 11021.964: 89.9872% ( 99) 00:14:29.469 11021.964 - 11081.542: 90.6808% ( 87) 00:14:29.469 11081.542 - 11141.120: 91.2787% ( 75) 00:14:29.469 11141.120 - 11200.698: 91.7650% ( 61) 00:14:29.469 11200.698 - 11260.276: 92.2274% ( 58) 00:14:29.469 11260.276 - 11319.855: 92.6100% ( 48) 00:14:29.469 11319.855 - 11379.433: 92.9528% ( 43) 00:14:29.469 11379.433 - 11439.011: 93.2717% ( 40) 00:14:29.469 11439.011 - 11498.589: 93.4630% ( 24) 00:14:29.469 11498.589 - 11558.167: 93.6543% ( 24) 00:14:29.469 11558.167 - 11617.745: 93.8058% ( 19) 00:14:29.469 11617.745 - 11677.324: 93.9652% ( 20) 00:14:29.469 11677.324 - 11736.902: 94.1406% ( 22) 00:14:29.469 11736.902 - 11796.480: 94.3001% ( 20) 00:14:29.469 11796.480 - 11856.058: 94.4914% ( 24) 00:14:29.469 11856.058 - 11915.636: 94.7146% ( 28) 00:14:29.469 11915.636 - 11975.215: 94.8980% ( 23) 00:14:29.469 11975.215 - 12034.793: 95.0973% ( 25) 00:14:29.469 12034.793 - 12094.371: 95.2248% ( 16) 00:14:29.469 12094.371 - 12153.949: 95.3842% ( 20) 00:14:29.469 12153.949 - 12213.527: 95.5357% ( 19) 00:14:29.469 12213.527 - 12273.105: 95.6952% ( 20) 00:14:29.469 12273.105 - 12332.684: 95.8466% ( 19) 00:14:29.469 12332.684 - 12392.262: 96.0061% ( 20) 00:14:29.469 12392.262 - 12451.840: 96.1735% ( 21) 00:14:29.469 12451.840 - 12511.418: 96.3329% ( 20) 00:14:29.469 12511.418 - 12570.996: 96.4923% ( 20) 00:14:29.469 12570.996 - 12630.575: 96.6438% ( 19) 00:14:29.469 12630.575 - 12690.153: 96.8033% ( 20) 00:14:29.469 12690.153 - 12749.731: 96.9707% ( 21) 00:14:29.469 12749.731 - 12809.309: 97.1221% ( 19) 00:14:29.469 12809.309 - 12868.887: 97.2577% ( 17) 00:14:29.469 12868.887 - 12928.465: 97.4011% ( 18) 00:14:29.469 12928.465 - 12988.044: 97.5207% ( 15) 00:14:29.469 12988.044 - 13047.622: 97.6084% ( 11) 00:14:29.469 13047.622 - 13107.200: 97.6562% ( 6) 00:14:29.469 13107.200 - 13166.778: 97.7121% ( 7) 00:14:29.469 13166.778 - 13226.356: 97.7519% ( 5) 00:14:29.469 13226.356 - 13285.935: 97.8237% ( 9) 00:14:29.469 13285.935 - 13345.513: 97.8795% ( 7) 00:14:29.469 13345.513 - 13405.091: 97.9353% ( 7) 00:14:29.469 13405.091 - 13464.669: 98.0150% ( 10) 00:14:29.469 13464.669 - 13524.247: 98.0788% ( 8) 00:14:29.469 13524.247 - 13583.825: 98.1425% ( 8) 00:14:29.469 13583.825 - 13643.404: 98.2063% ( 8) 00:14:29.469 13643.404 - 13702.982: 98.2701% ( 8) 00:14:29.469 13702.982 - 13762.560: 98.3339% ( 8) 00:14:29.469 13762.560 - 13822.138: 98.3897% ( 7) 00:14:29.469 13822.138 - 13881.716: 98.4534% ( 8) 00:14:29.469 13881.716 - 13941.295: 98.4933% ( 5) 00:14:29.469 13941.295 - 14000.873: 98.5491% ( 7) 00:14:29.469 14000.873 - 14060.451: 98.5890% ( 5) 00:14:29.469 14060.451 - 14120.029: 98.6209% ( 4) 00:14:29.469 14120.029 - 14179.607: 98.6687% ( 6) 00:14:29.469 14179.607 - 14239.185: 98.7085% ( 5) 00:14:29.469 14239.185 - 14298.764: 98.7564% ( 6) 00:14:29.469 14298.764 - 14358.342: 98.7962% ( 5) 00:14:29.469 14358.342 - 14417.920: 98.8281% ( 4) 00:14:29.469 14417.920 - 14477.498: 98.8600% ( 4) 00:14:29.469 14477.498 - 14537.076: 98.8760% ( 2) 00:14:29.469 14537.076 - 14596.655: 98.8919% ( 2) 00:14:29.469 14596.655 - 14656.233: 98.9078% ( 2) 00:14:29.469 14656.233 - 14715.811: 98.9238% ( 2) 00:14:29.469 14715.811 - 14775.389: 98.9397% ( 2) 00:14:29.469 14775.389 - 14834.967: 98.9557% ( 2) 
00:14:29.469 14834.967 - 14894.545: 98.9716% ( 2) 00:14:29.469 14894.545 - 14954.124: 98.9796% ( 1) 00:14:29.469 21924.771 - 22043.927: 98.9876% ( 1) 00:14:29.469 22043.927 - 22163.084: 99.0035% ( 2) 00:14:29.469 22163.084 - 22282.240: 99.0274% ( 3) 00:14:29.469 22282.240 - 22401.396: 99.0513% ( 3) 00:14:29.469 22401.396 - 22520.553: 99.0753% ( 3) 00:14:29.469 22520.553 - 22639.709: 99.0992% ( 3) 00:14:29.469 22639.709 - 22758.865: 99.1231% ( 3) 00:14:29.469 22758.865 - 22878.022: 99.1470% ( 3) 00:14:29.469 22878.022 - 22997.178: 99.1709% ( 3) 00:14:29.469 22997.178 - 23116.335: 99.1948% ( 3) 00:14:29.469 23116.335 - 23235.491: 99.2188% ( 3) 00:14:29.469 23235.491 - 23354.647: 99.2427% ( 3) 00:14:29.469 23354.647 - 23473.804: 99.2666% ( 3) 00:14:29.469 23473.804 - 23592.960: 99.2825% ( 2) 00:14:29.469 23592.960 - 23712.116: 99.3064% ( 3) 00:14:29.469 23712.116 - 23831.273: 99.3304% ( 3) 00:14:29.469 23831.273 - 23950.429: 99.3543% ( 3) 00:14:29.469 23950.429 - 24069.585: 99.3782% ( 3) 00:14:29.469 24069.585 - 24188.742: 99.4021% ( 3) 00:14:29.469 24188.742 - 24307.898: 99.4260% ( 3) 00:14:29.469 24307.898 - 24427.055: 99.4499% ( 3) 00:14:29.469 24427.055 - 24546.211: 99.4739% ( 3) 00:14:29.469 24546.211 - 24665.367: 99.4898% ( 2) 00:14:29.469 29431.622 - 29550.778: 99.4978% ( 1) 00:14:29.469 29550.778 - 29669.935: 99.5217% ( 3) 00:14:29.469 29669.935 - 29789.091: 99.5456% ( 3) 00:14:29.469 29789.091 - 29908.247: 99.5615% ( 2) 00:14:29.469 29908.247 - 30027.404: 99.5855% ( 3) 00:14:29.469 30027.404 - 30146.560: 99.6014% ( 2) 00:14:29.469 30146.560 - 30265.716: 99.6253% ( 3) 00:14:29.469 30265.716 - 30384.873: 99.6492% ( 3) 00:14:29.469 30384.873 - 30504.029: 99.6732% ( 3) 00:14:29.469 30504.029 - 30742.342: 99.7210% ( 6) 00:14:29.469 30742.342 - 30980.655: 99.7608% ( 5) 00:14:29.469 30980.655 - 31218.967: 99.8166% ( 7) 00:14:29.469 31218.967 - 31457.280: 99.8565% ( 5) 00:14:29.469 31457.280 - 31695.593: 99.9043% ( 6) 00:14:29.469 31695.593 - 31933.905: 99.9522% ( 6) 00:14:29.469 31933.905 - 32172.218: 100.0000% ( 6) 00:14:29.469 00:14:29.469 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:14:29.469 ============================================================================== 00:14:29.469 Range in us Cumulative IO count 00:14:29.469 7923.898 - 7983.476: 0.0239% ( 3) 00:14:29.469 7983.476 - 8043.055: 0.0558% ( 4) 00:14:29.470 8043.055 - 8102.633: 0.0877% ( 4) 00:14:29.470 8102.633 - 8162.211: 0.1435% ( 7) 00:14:29.470 8162.211 - 8221.789: 0.3268% ( 23) 00:14:29.470 8221.789 - 8281.367: 0.6776% ( 44) 00:14:29.470 8281.367 - 8340.945: 1.1240% ( 56) 00:14:29.470 8340.945 - 8400.524: 1.7140% ( 74) 00:14:29.470 8400.524 - 8460.102: 2.4075% ( 87) 00:14:29.470 8460.102 - 8519.680: 3.2366% ( 104) 00:14:29.470 8519.680 - 8579.258: 4.2650% ( 129) 00:14:29.470 8579.258 - 8638.836: 5.5166% ( 157) 00:14:29.470 8638.836 - 8698.415: 6.9754% ( 183) 00:14:29.470 8698.415 - 8757.993: 8.4104% ( 180) 00:14:29.470 8757.993 - 8817.571: 9.9968% ( 199) 00:14:29.470 8817.571 - 8877.149: 11.8543% ( 233) 00:14:29.470 8877.149 - 8936.727: 13.6958% ( 231) 00:14:29.470 8936.727 - 8996.305: 15.5692% ( 235) 00:14:29.470 8996.305 - 9055.884: 17.4984% ( 242) 00:14:29.470 9055.884 - 9115.462: 19.3957% ( 238) 00:14:29.470 9115.462 - 9175.040: 21.3648% ( 247) 00:14:29.470 9175.040 - 9234.618: 23.1983% ( 230) 00:14:29.470 9234.618 - 9294.196: 24.9920% ( 225) 00:14:29.470 9294.196 - 9353.775: 26.8973% ( 239) 00:14:29.470 9353.775 - 9413.353: 28.8664% ( 247) 00:14:29.470 9413.353 - 9472.931: 31.0666% ( 276) 
00:14:29.470 9472.931 - 9532.509: 33.4104% ( 294) 00:14:29.470 9532.509 - 9592.087: 35.9295% ( 316) 00:14:29.470 9592.087 - 9651.665: 38.4726% ( 319) 00:14:29.470 9651.665 - 9711.244: 40.8960% ( 304) 00:14:29.470 9711.244 - 9770.822: 43.4869% ( 325) 00:14:29.470 9770.822 - 9830.400: 46.1177% ( 330) 00:14:29.470 9830.400 - 9889.978: 48.7006% ( 324) 00:14:29.470 9889.978 - 9949.556: 51.3951% ( 338) 00:14:29.470 9949.556 - 10009.135: 54.0497% ( 333) 00:14:29.470 10009.135 - 10068.713: 56.8320% ( 349) 00:14:29.470 10068.713 - 10128.291: 59.4786% ( 332) 00:14:29.470 10128.291 - 10187.869: 62.1333% ( 333) 00:14:29.470 10187.869 - 10247.447: 64.8039% ( 335) 00:14:29.470 10247.447 - 10307.025: 67.4426% ( 331) 00:14:29.470 10307.025 - 10366.604: 70.1610% ( 341) 00:14:29.470 10366.604 - 10426.182: 72.9034% ( 344) 00:14:29.470 10426.182 - 10485.760: 75.5102% ( 327) 00:14:29.470 10485.760 - 10545.338: 78.1011% ( 325) 00:14:29.470 10545.338 - 10604.916: 80.5325% ( 305) 00:14:29.470 10604.916 - 10664.495: 82.6770% ( 269) 00:14:29.470 10664.495 - 10724.073: 84.5026% ( 229) 00:14:29.470 10724.073 - 10783.651: 86.0491% ( 194) 00:14:29.470 10783.651 - 10843.229: 87.4123% ( 171) 00:14:29.470 10843.229 - 10902.807: 88.5124% ( 138) 00:14:29.470 10902.807 - 10962.385: 89.4372% ( 116) 00:14:29.470 10962.385 - 11021.964: 90.2742% ( 105) 00:14:29.470 11021.964 - 11081.542: 90.9040% ( 79) 00:14:29.470 11081.542 - 11141.120: 91.4461% ( 68) 00:14:29.470 11141.120 - 11200.698: 91.8686% ( 53) 00:14:29.470 11200.698 - 11260.276: 92.2274% ( 45) 00:14:29.470 11260.276 - 11319.855: 92.5462% ( 40) 00:14:29.470 11319.855 - 11379.433: 92.8013% ( 32) 00:14:29.470 11379.433 - 11439.011: 93.0644% ( 33) 00:14:29.470 11439.011 - 11498.589: 93.2637% ( 25) 00:14:29.470 11498.589 - 11558.167: 93.4710% ( 26) 00:14:29.470 11558.167 - 11617.745: 93.6623% ( 24) 00:14:29.470 11617.745 - 11677.324: 93.8297% ( 21) 00:14:29.470 11677.324 - 11736.902: 93.9812% ( 19) 00:14:29.470 11736.902 - 11796.480: 94.1645% ( 23) 00:14:29.470 11796.480 - 11856.058: 94.3479% ( 23) 00:14:29.470 11856.058 - 11915.636: 94.5153% ( 21) 00:14:29.470 11915.636 - 11975.215: 94.6907% ( 22) 00:14:29.470 11975.215 - 12034.793: 94.8820% ( 24) 00:14:29.470 12034.793 - 12094.371: 95.0574% ( 22) 00:14:29.470 12094.371 - 12153.949: 95.2248% ( 21) 00:14:29.470 12153.949 - 12213.527: 95.3842% ( 20) 00:14:29.470 12213.527 - 12273.105: 95.5676% ( 23) 00:14:29.470 12273.105 - 12332.684: 95.7350% ( 21) 00:14:29.470 12332.684 - 12392.262: 95.8945% ( 20) 00:14:29.470 12392.262 - 12451.840: 96.0539% ( 20) 00:14:29.470 12451.840 - 12511.418: 96.2054% ( 19) 00:14:29.470 12511.418 - 12570.996: 96.3728% ( 21) 00:14:29.470 12570.996 - 12630.575: 96.5402% ( 21) 00:14:29.470 12630.575 - 12690.153: 96.6996% ( 20) 00:14:29.470 12690.153 - 12749.731: 96.8511% ( 19) 00:14:29.470 12749.731 - 12809.309: 97.0265% ( 22) 00:14:29.470 12809.309 - 12868.887: 97.1540% ( 16) 00:14:29.470 12868.887 - 12928.465: 97.2656% ( 14) 00:14:29.470 12928.465 - 12988.044: 97.3772% ( 14) 00:14:29.470 12988.044 - 13047.622: 97.4490% ( 9) 00:14:29.470 13047.622 - 13107.200: 97.4888% ( 5) 00:14:29.470 13107.200 - 13166.778: 97.5287% ( 5) 00:14:29.470 13166.778 - 13226.356: 97.5686% ( 5) 00:14:29.470 13226.356 - 13285.935: 97.6244% ( 7) 00:14:29.470 13285.935 - 13345.513: 97.6961% ( 9) 00:14:29.470 13345.513 - 13405.091: 97.7599% ( 8) 00:14:29.470 13405.091 - 13464.669: 97.8237% ( 8) 00:14:29.470 13464.669 - 13524.247: 97.9114% ( 11) 00:14:29.470 13524.247 - 13583.825: 97.9751% ( 8) 00:14:29.470 13583.825 - 13643.404: 
98.0389% ( 8) 00:14:29.470 13643.404 - 13702.982: 98.1107% ( 9) 00:14:29.470 13702.982 - 13762.560: 98.1824% ( 9) 00:14:29.470 13762.560 - 13822.138: 98.2462% ( 8) 00:14:29.470 13822.138 - 13881.716: 98.3020% ( 7) 00:14:29.470 13881.716 - 13941.295: 98.3658% ( 8) 00:14:29.470 13941.295 - 14000.873: 98.4216% ( 7) 00:14:29.470 14000.873 - 14060.451: 98.4614% ( 5) 00:14:29.470 14060.451 - 14120.029: 98.5013% ( 5) 00:14:29.470 14120.029 - 14179.607: 98.5491% ( 6) 00:14:29.470 14179.607 - 14239.185: 98.5890% ( 5) 00:14:29.470 14239.185 - 14298.764: 98.6288% ( 5) 00:14:29.470 14298.764 - 14358.342: 98.6767% ( 6) 00:14:29.470 14358.342 - 14417.920: 98.7165% ( 5) 00:14:29.470 14417.920 - 14477.498: 98.7484% ( 4) 00:14:29.470 14477.498 - 14537.076: 98.7723% ( 3) 00:14:29.470 14537.076 - 14596.655: 98.7803% ( 1) 00:14:29.470 14596.655 - 14656.233: 98.7962% ( 2) 00:14:29.470 14656.233 - 14715.811: 98.8122% ( 2) 00:14:29.470 14715.811 - 14775.389: 98.8281% ( 2) 00:14:29.470 14775.389 - 14834.967: 98.8441% ( 2) 00:14:29.470 14834.967 - 14894.545: 98.8600% ( 2) 00:14:29.470 14894.545 - 14954.124: 98.8760% ( 2) 00:14:29.470 14954.124 - 15013.702: 98.8999% ( 3) 00:14:29.470 15013.702 - 15073.280: 98.9158% ( 2) 00:14:29.470 15073.280 - 15132.858: 98.9318% ( 2) 00:14:29.470 15132.858 - 15192.436: 98.9477% ( 2) 00:14:29.470 15192.436 - 15252.015: 98.9636% ( 2) 00:14:29.470 15252.015 - 15371.171: 98.9796% ( 2) 00:14:29.470 18826.705 - 18945.862: 98.9955% ( 2) 00:14:29.470 18945.862 - 19065.018: 99.0195% ( 3) 00:14:29.470 19065.018 - 19184.175: 99.0513% ( 4) 00:14:29.470 19184.175 - 19303.331: 99.0673% ( 2) 00:14:29.470 19303.331 - 19422.487: 99.0832% ( 2) 00:14:29.470 19422.487 - 19541.644: 99.1071% ( 3) 00:14:29.470 19541.644 - 19660.800: 99.1231% ( 2) 00:14:29.470 19660.800 - 19779.956: 99.1550% ( 4) 00:14:29.470 19779.956 - 19899.113: 99.1709% ( 2) 00:14:29.470 19899.113 - 20018.269: 99.1948% ( 3) 00:14:29.470 20018.269 - 20137.425: 99.2188% ( 3) 00:14:29.470 20137.425 - 20256.582: 99.2427% ( 3) 00:14:29.470 20256.582 - 20375.738: 99.2586% ( 2) 00:14:29.470 20375.738 - 20494.895: 99.2825% ( 3) 00:14:29.470 20494.895 - 20614.051: 99.3064% ( 3) 00:14:29.470 20614.051 - 20733.207: 99.3224% ( 2) 00:14:29.470 20733.207 - 20852.364: 99.3463% ( 3) 00:14:29.470 20852.364 - 20971.520: 99.3702% ( 3) 00:14:29.470 20971.520 - 21090.676: 99.3782% ( 1) 00:14:29.470 21090.676 - 21209.833: 99.4021% ( 3) 00:14:29.470 21209.833 - 21328.989: 99.4260% ( 3) 00:14:29.470 21328.989 - 21448.145: 99.4499% ( 3) 00:14:29.470 21448.145 - 21567.302: 99.4818% ( 4) 00:14:29.470 21567.302 - 21686.458: 99.4898% ( 1) 00:14:29.470 26333.556 - 26452.713: 99.4978% ( 1) 00:14:29.470 26452.713 - 26571.869: 99.5217% ( 3) 00:14:29.470 26571.869 - 26691.025: 99.5456% ( 3) 00:14:29.470 26691.025 - 26810.182: 99.5695% ( 3) 00:14:29.470 26810.182 - 26929.338: 99.5934% ( 3) 00:14:29.470 26929.338 - 27048.495: 99.6173% ( 3) 00:14:29.471 27048.495 - 27167.651: 99.6333% ( 2) 00:14:29.471 27167.651 - 27286.807: 99.6572% ( 3) 00:14:29.471 27286.807 - 27405.964: 99.6811% ( 3) 00:14:29.471 27405.964 - 27525.120: 99.7050% ( 3) 00:14:29.471 27525.120 - 27644.276: 99.7290% ( 3) 00:14:29.471 27644.276 - 27763.433: 99.7529% ( 3) 00:14:29.471 27763.433 - 27882.589: 99.7768% ( 3) 00:14:29.471 27882.589 - 28001.745: 99.8007% ( 3) 00:14:29.471 28001.745 - 28120.902: 99.8246% ( 3) 00:14:29.471 28120.902 - 28240.058: 99.8485% ( 3) 00:14:29.471 28240.058 - 28359.215: 99.8724% ( 3) 00:14:29.471 28359.215 - 28478.371: 99.8964% ( 3) 00:14:29.471 28478.371 - 28597.527: 
99.9203% ( 3)
00:14:29.471 28597.527 - 28716.684: 99.9442% ( 3)
00:14:29.471 28716.684 - 28835.840: 99.9681% ( 3)
00:14:29.471 28835.840 - 28954.996: 99.9920% ( 3)
00:14:29.471 28954.996 - 29074.153: 100.0000% ( 1)
00:14:29.471
00:14:29.729 10:06:00 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:14:31.104 Initializing NVMe Controllers
00:14:31.104 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:14:31.104 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:14:31.104 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:14:31.104 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:14:31.104 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:14:31.104 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:14:31.104 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:14:31.104 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:14:31.104 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:14:31.104 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:14:31.104 Initialization complete. Launching workers.
00:14:31.104 ========================================================
00:14:31.104 Latency(us)
00:14:31.104 Device Information : IOPS MiB/s Average min max
00:14:31.104 PCIE (0000:00:10.0) NSID 1 from core 0: 11253.23 131.87 11402.09 8447.50 48209.68
00:14:31.104 PCIE (0000:00:11.0) NSID 1 from core 0: 11253.23 131.87 11371.80 8809.56 44931.89
00:14:31.104 PCIE (0000:00:13.0) NSID 1 from core 0: 11253.23 131.87 11340.62 8546.19 42651.83
00:14:31.104 PCIE (0000:00:12.0) NSID 1 from core 0: 11253.23 131.87 11308.90 8608.63 39437.77
00:14:31.104 PCIE (0000:00:12.0) NSID 2 from core 0: 11317.17 132.62 11213.27 8756.10 30020.00
00:14:31.104 PCIE (0000:00:12.0) NSID 3 from core 0: 11317.17 132.62 11181.26 8666.59 26775.05
00:14:31.104 ========================================================
00:14:31.104 Total : 67647.26 792.74 11302.79 8447.50 48209.68
00:14:31.104
00:14:31.104 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:14:31.104 =================================================================================
00:14:31.104 1.00000% : 9055.884us
00:14:31.104 10.00000% : 10009.135us
00:14:31.104 25.00000% : 10485.760us
00:14:31.104 50.00000% : 11021.964us
00:14:31.104 75.00000% : 11617.745us
00:14:31.104 90.00000% : 12273.105us
00:14:31.104 95.00000% : 12809.309us
00:14:31.104 98.00000% : 14298.764us
00:14:31.104 99.00000% : 38130.036us
00:14:31.104 99.50000% : 45756.044us
00:14:31.104 99.90000% : 47900.858us
00:14:31.104 99.99000% : 48139.171us
00:14:31.104 99.99900% : 48377.484us
00:14:31.104 99.99990% : 48377.484us
00:14:31.104 99.99999% : 48377.484us
00:14:31.104
00:14:31.104 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:14:31.104 =================================================================================
00:14:31.104 1.00000% : 9175.040us
00:14:31.104 10.00000% : 10187.869us
00:14:31.104 25.00000% : 10604.916us
00:14:31.104 50.00000% : 11021.964us
00:14:31.104 75.00000% : 11498.589us
00:14:31.104 90.00000% : 12094.371us
00:14:31.104 95.00000% : 12630.575us
00:14:31.104 98.00000% : 14417.920us
00:14:31.104 99.00000% : 35031.971us
00:14:31.104 99.50000% : 42657.978us
00:14:31.104 99.90000% : 44564.480us
00:14:31.104 99.99000% : 45041.105us
00:14:31.104 99.99900% : 45041.105us
00:14:31.104 99.99990% : 45041.105us
00:14:31.104 99.99999% : 45041.105us
00:14:31.104
00:14:31.104 Summary latency data for PCIE
(0000:00:13.0) NSID 1 from core 0: 00:14:31.104 ================================================================================= 00:14:31.104 1.00000% : 9115.462us 00:14:31.104 10.00000% : 10128.291us 00:14:31.104 25.00000% : 10604.916us 00:14:31.104 50.00000% : 11021.964us 00:14:31.104 75.00000% : 11498.589us 00:14:31.104 90.00000% : 12094.371us 00:14:31.104 95.00000% : 12749.731us 00:14:31.104 98.00000% : 14298.764us 00:14:31.104 99.00000% : 32648.844us 00:14:31.104 99.50000% : 40513.164us 00:14:31.104 99.90000% : 42419.665us 00:14:31.104 99.99000% : 42657.978us 00:14:31.104 99.99900% : 42657.978us 00:14:31.104 99.99990% : 42657.978us 00:14:31.104 99.99999% : 42657.978us 00:14:31.104 00:14:31.104 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:14:31.104 ================================================================================= 00:14:31.104 1.00000% : 9055.884us 00:14:31.104 10.00000% : 10187.869us 00:14:31.104 25.00000% : 10604.916us 00:14:31.104 50.00000% : 11021.964us 00:14:31.104 75.00000% : 11498.589us 00:14:31.104 90.00000% : 12094.371us 00:14:31.105 95.00000% : 12749.731us 00:14:31.105 98.00000% : 14358.342us 00:14:31.105 99.00000% : 29312.465us 00:14:31.105 99.50000% : 37176.785us 00:14:31.105 99.90000% : 39083.287us 00:14:31.105 99.99000% : 39559.913us 00:14:31.105 99.99900% : 39559.913us 00:14:31.105 99.99990% : 39559.913us 00:14:31.105 99.99999% : 39559.913us 00:14:31.105 00:14:31.105 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:14:31.105 ================================================================================= 00:14:31.105 1.00000% : 9115.462us 00:14:31.105 10.00000% : 10128.291us 00:14:31.105 25.00000% : 10604.916us 00:14:31.105 50.00000% : 11021.964us 00:14:31.105 75.00000% : 11558.167us 00:14:31.105 90.00000% : 12094.371us 00:14:31.105 95.00000% : 12749.731us 00:14:31.105 98.00000% : 14656.233us 00:14:31.105 99.00000% : 20018.269us 00:14:31.105 99.50000% : 27763.433us 00:14:31.105 99.90000% : 29669.935us 00:14:31.105 99.99000% : 30027.404us 00:14:31.105 99.99900% : 30027.404us 00:14:31.105 99.99990% : 30027.404us 00:14:31.105 99.99999% : 30027.404us 00:14:31.105 00:14:31.105 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:14:31.105 ================================================================================= 00:14:31.105 1.00000% : 8996.305us 00:14:31.105 10.00000% : 10187.869us 00:14:31.105 25.00000% : 10604.916us 00:14:31.105 50.00000% : 11021.964us 00:14:31.105 75.00000% : 11498.589us 00:14:31.105 90.00000% : 12153.949us 00:14:31.105 95.00000% : 12749.731us 00:14:31.105 98.00000% : 14775.389us 00:14:31.105 99.00000% : 17158.516us 00:14:31.105 99.50000% : 22520.553us 00:14:31.105 99.90000% : 26333.556us 00:14:31.105 99.99000% : 26810.182us 00:14:31.105 99.99900% : 26810.182us 00:14:31.105 99.99990% : 26810.182us 00:14:31.105 99.99999% : 26810.182us 00:14:31.105 00:14:31.105 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:14:31.105 ============================================================================== 00:14:31.105 Range in us Cumulative IO count 00:14:31.105 8400.524 - 8460.102: 0.0355% ( 4) 00:14:31.105 8460.102 - 8519.680: 0.0444% ( 1) 00:14:31.105 8519.680 - 8579.258: 0.0710% ( 3) 00:14:31.105 8579.258 - 8638.836: 0.1065% ( 4) 00:14:31.105 8638.836 - 8698.415: 0.1598% ( 6) 00:14:31.105 8698.415 - 8757.993: 0.3018% ( 16) 00:14:31.105 8757.993 - 8817.571: 0.3817% ( 9) 00:14:31.105 8817.571 - 8877.149: 0.4439% ( 7) 00:14:31.105 8877.149 - 8936.727: 0.6126% ( 19) 
00:14:31.105 8936.727 - 8996.305: 0.9411% ( 37) 00:14:31.105 8996.305 - 9055.884: 1.3938% ( 51) 00:14:31.105 9055.884 - 9115.462: 1.9087% ( 58) 00:14:31.105 9115.462 - 9175.040: 2.2816% ( 42) 00:14:31.105 9175.040 - 9234.618: 2.7965% ( 58) 00:14:31.105 9234.618 - 9294.196: 3.0717% ( 31) 00:14:31.105 9294.196 - 9353.775: 3.3647% ( 33) 00:14:31.105 9353.775 - 9413.353: 3.7731% ( 46) 00:14:31.105 9413.353 - 9472.931: 4.1637% ( 44) 00:14:31.105 9472.931 - 9532.509: 4.5987% ( 49) 00:14:31.105 9532.509 - 9592.087: 4.9716% ( 42) 00:14:31.105 9592.087 - 9651.665: 5.6996% ( 82) 00:14:31.105 9651.665 - 9711.244: 6.2411% ( 61) 00:14:31.105 9711.244 - 9770.822: 6.9336% ( 78) 00:14:31.105 9770.822 - 9830.400: 7.6616% ( 82) 00:14:31.105 9830.400 - 9889.978: 8.5494% ( 100) 00:14:31.105 9889.978 - 9949.556: 9.2596% ( 80) 00:14:31.105 9949.556 - 10009.135: 10.3249% ( 120) 00:14:31.105 10009.135 - 10068.713: 11.7365% ( 159) 00:14:31.105 10068.713 - 10128.291: 13.3256% ( 179) 00:14:31.105 10128.291 - 10187.869: 15.0036% ( 189) 00:14:31.105 10187.869 - 10247.447: 16.6193% ( 182) 00:14:31.105 10247.447 - 10307.025: 18.5636% ( 219) 00:14:31.105 10307.025 - 10366.604: 20.7830% ( 250) 00:14:31.105 10366.604 - 10426.182: 23.1623% ( 268) 00:14:31.105 10426.182 - 10485.760: 25.5415% ( 268) 00:14:31.105 10485.760 - 10545.338: 28.3026% ( 311) 00:14:31.105 10545.338 - 10604.916: 30.9215% ( 295) 00:14:31.105 10604.916 - 10664.495: 33.6559% ( 308) 00:14:31.105 10664.495 - 10724.073: 36.0884% ( 274) 00:14:31.105 10724.073 - 10783.651: 39.0181% ( 330) 00:14:31.105 10783.651 - 10843.229: 42.0188% ( 338) 00:14:31.105 10843.229 - 10902.807: 45.1527% ( 353) 00:14:31.105 10902.807 - 10962.385: 48.0469% ( 326) 00:14:31.105 10962.385 - 11021.964: 51.1275% ( 347) 00:14:31.105 11021.964 - 11081.542: 53.7109% ( 291) 00:14:31.105 11081.542 - 11141.120: 56.5341% ( 318) 00:14:31.105 11141.120 - 11200.698: 59.4105% ( 324) 00:14:31.105 11200.698 - 11260.276: 62.4556% ( 343) 00:14:31.105 11260.276 - 11319.855: 65.2876% ( 319) 00:14:31.105 11319.855 - 11379.433: 68.0309% ( 309) 00:14:31.105 11379.433 - 11439.011: 70.5522% ( 284) 00:14:31.105 11439.011 - 11498.589: 72.7539% ( 248) 00:14:31.105 11498.589 - 11558.167: 74.8935% ( 241) 00:14:31.105 11558.167 - 11617.745: 76.9531% ( 232) 00:14:31.105 11617.745 - 11677.324: 78.6310% ( 189) 00:14:31.105 11677.324 - 11736.902: 80.5842% ( 220) 00:14:31.105 11736.902 - 11796.480: 82.1200% ( 173) 00:14:31.105 11796.480 - 11856.058: 83.5760% ( 164) 00:14:31.105 11856.058 - 11915.636: 84.8189% ( 140) 00:14:31.105 11915.636 - 11975.215: 85.9819% ( 131) 00:14:31.105 11975.215 - 12034.793: 86.9496% ( 109) 00:14:31.105 12034.793 - 12094.371: 87.9439% ( 112) 00:14:31.105 12094.371 - 12153.949: 88.8228% ( 99) 00:14:31.105 12153.949 - 12213.527: 89.6484% ( 93) 00:14:31.105 12213.527 - 12273.105: 90.5273% ( 99) 00:14:31.105 12273.105 - 12332.684: 91.3885% ( 97) 00:14:31.105 12332.684 - 12392.262: 92.2141% ( 93) 00:14:31.105 12392.262 - 12451.840: 92.9865% ( 87) 00:14:31.105 12451.840 - 12511.418: 93.5014% ( 58) 00:14:31.105 12511.418 - 12570.996: 93.9364% ( 49) 00:14:31.105 12570.996 - 12630.575: 94.3892% ( 51) 00:14:31.105 12630.575 - 12690.153: 94.6555% ( 30) 00:14:31.105 12690.153 - 12749.731: 94.9130% ( 29) 00:14:31.105 12749.731 - 12809.309: 95.1438% ( 26) 00:14:31.105 12809.309 - 12868.887: 95.4545% ( 35) 00:14:31.105 12868.887 - 12928.465: 95.7298% ( 31) 00:14:31.105 12928.465 - 12988.044: 95.9783% ( 28) 00:14:31.105 12988.044 - 13047.622: 96.2891% ( 35) 00:14:31.105 13047.622 - 13107.200: 96.5110% ( 25) 
00:14:31.105 13107.200 - 13166.778: 96.6886% ( 20) 00:14:31.105 13166.778 - 13226.356: 96.8484% ( 18) 00:14:31.105 13226.356 - 13285.935: 96.9194% ( 8) 00:14:31.105 13285.935 - 13345.513: 96.9638% ( 5) 00:14:31.105 13345.513 - 13405.091: 97.0259% ( 7) 00:14:31.105 13405.091 - 13464.669: 97.0969% ( 8) 00:14:31.105 13464.669 - 13524.247: 97.2124% ( 13) 00:14:31.105 13524.247 - 13583.825: 97.2834% ( 8) 00:14:31.105 13583.825 - 13643.404: 97.3455% ( 7) 00:14:31.105 13643.404 - 13702.982: 97.3988% ( 6) 00:14:31.105 13702.982 - 13762.560: 97.4254% ( 3) 00:14:31.105 13762.560 - 13822.138: 97.5231% ( 11) 00:14:31.105 13822.138 - 13881.716: 97.5586% ( 4) 00:14:31.105 13881.716 - 13941.295: 97.6119% ( 6) 00:14:31.105 13941.295 - 14000.873: 97.6474% ( 4) 00:14:31.105 14000.873 - 14060.451: 97.6562% ( 1) 00:14:31.105 14060.451 - 14120.029: 97.7983% ( 16) 00:14:31.105 14120.029 - 14179.607: 97.8693% ( 8) 00:14:31.105 14179.607 - 14239.185: 97.9492% ( 9) 00:14:31.105 14239.185 - 14298.764: 98.0114% ( 7) 00:14:31.105 14298.764 - 14358.342: 98.0735% ( 7) 00:14:31.105 14358.342 - 14417.920: 98.1534% ( 9) 00:14:31.105 14417.920 - 14477.498: 98.2067% ( 6) 00:14:31.105 14477.498 - 14537.076: 98.2688% ( 7) 00:14:31.105 14537.076 - 14596.655: 98.3043% ( 4) 00:14:31.105 14596.655 - 14656.233: 98.3576% ( 6) 00:14:31.105 14656.233 - 14715.811: 98.3754% ( 2) 00:14:31.105 14715.811 - 14775.389: 98.4020% ( 3) 00:14:31.105 14775.389 - 14834.967: 98.4553% ( 6) 00:14:31.105 14834.967 - 14894.545: 98.4730% ( 2) 00:14:31.105 14894.545 - 14954.124: 98.5174% ( 5) 00:14:31.105 14954.124 - 15013.702: 98.5618% ( 5) 00:14:31.105 15013.702 - 15073.280: 98.6062% ( 5) 00:14:31.105 15073.280 - 15132.858: 98.6328% ( 3) 00:14:31.105 15132.858 - 15192.436: 98.6861% ( 6) 00:14:31.105 15192.436 - 15252.015: 98.7216% ( 4) 00:14:31.105 15252.015 - 15371.171: 98.7571% ( 4) 00:14:31.105 15371.171 - 15490.327: 98.8104% ( 6) 00:14:31.105 15490.327 - 15609.484: 98.8370% ( 3) 00:14:31.105 15609.484 - 15728.640: 98.8636% ( 3) 00:14:31.105 36938.473 - 37176.785: 98.8725% ( 1) 00:14:31.105 37176.785 - 37415.098: 98.9080% ( 4) 00:14:31.105 37415.098 - 37653.411: 98.9613% ( 6) 00:14:31.105 37653.411 - 37891.724: 98.9968% ( 4) 00:14:31.105 37891.724 - 38130.036: 99.0589% ( 7) 00:14:31.105 38130.036 - 38368.349: 99.1033% ( 5) 00:14:31.105 38368.349 - 38606.662: 99.1477% ( 5) 00:14:31.105 38606.662 - 38844.975: 99.1921% ( 5) 00:14:31.105 38844.975 - 39083.287: 99.2365% ( 5) 00:14:31.105 39083.287 - 39321.600: 99.2898% ( 6) 00:14:31.105 39321.600 - 39559.913: 99.3253% ( 4) 00:14:31.105 39559.913 - 39798.225: 99.3874% ( 7) 00:14:31.105 39798.225 - 40036.538: 99.4318% ( 5) 00:14:31.105 45279.418 - 45517.731: 99.4673% ( 4) 00:14:31.105 45517.731 - 45756.044: 99.5206% ( 6) 00:14:31.105 45756.044 - 45994.356: 99.5650% ( 5) 00:14:31.105 45994.356 - 46232.669: 99.6094% ( 5) 00:14:31.106 46232.669 - 46470.982: 99.6626% ( 6) 00:14:31.106 46470.982 - 46709.295: 99.7070% ( 5) 00:14:31.106 46709.295 - 46947.607: 99.7425% ( 4) 00:14:31.106 46947.607 - 47185.920: 99.7958% ( 6) 00:14:31.106 47185.920 - 47424.233: 99.8402% ( 5) 00:14:31.106 47424.233 - 47662.545: 99.8846% ( 5) 00:14:31.106 47662.545 - 47900.858: 99.9379% ( 6) 00:14:31.106 47900.858 - 48139.171: 99.9911% ( 6) 00:14:31.106 48139.171 - 48377.484: 100.0000% ( 1) 00:14:31.106 00:14:31.106 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:14:31.106 ============================================================================== 00:14:31.106 Range in us Cumulative IO count 00:14:31.106 8757.993 - 
8817.571: 0.0089% ( 1) 00:14:31.106 8817.571 - 8877.149: 0.0533% ( 5) 00:14:31.106 8877.149 - 8936.727: 0.1243% ( 8) 00:14:31.106 8936.727 - 8996.305: 0.2841% ( 18) 00:14:31.106 8996.305 - 9055.884: 0.5149% ( 26) 00:14:31.106 9055.884 - 9115.462: 0.9055% ( 44) 00:14:31.106 9115.462 - 9175.040: 1.5270% ( 70) 00:14:31.106 9175.040 - 9234.618: 2.0508% ( 59) 00:14:31.106 9234.618 - 9294.196: 2.3970% ( 39) 00:14:31.106 9294.196 - 9353.775: 2.8587% ( 52) 00:14:31.106 9353.775 - 9413.353: 3.4712% ( 69) 00:14:31.106 9413.353 - 9472.931: 3.9240% ( 51) 00:14:31.106 9472.931 - 9532.509: 4.5543% ( 71) 00:14:31.106 9532.509 - 9592.087: 5.1758% ( 70) 00:14:31.106 9592.087 - 9651.665: 5.6463% ( 53) 00:14:31.106 9651.665 - 9711.244: 6.1435% ( 56) 00:14:31.106 9711.244 - 9770.822: 6.6229% ( 54) 00:14:31.106 9770.822 - 9830.400: 6.9513% ( 37) 00:14:31.106 9830.400 - 9889.978: 7.2177% ( 30) 00:14:31.106 9889.978 - 9949.556: 7.5195% ( 34) 00:14:31.106 9949.556 - 10009.135: 7.9634% ( 50) 00:14:31.106 10009.135 - 10068.713: 8.9577% ( 112) 00:14:31.106 10068.713 - 10128.291: 9.7035% ( 84) 00:14:31.106 10128.291 - 10187.869: 11.0085% ( 147) 00:14:31.106 10187.869 - 10247.447: 12.4645% ( 164) 00:14:31.106 10247.447 - 10307.025: 14.3466% ( 212) 00:14:31.106 10307.025 - 10366.604: 16.1754% ( 206) 00:14:31.106 10366.604 - 10426.182: 18.4038% ( 251) 00:14:31.106 10426.182 - 10485.760: 20.7830% ( 268) 00:14:31.106 10485.760 - 10545.338: 23.5795% ( 315) 00:14:31.106 10545.338 - 10604.916: 27.0508% ( 391) 00:14:31.106 10604.916 - 10664.495: 30.4332% ( 381) 00:14:31.106 10664.495 - 10724.073: 33.4606% ( 341) 00:14:31.106 10724.073 - 10783.651: 36.4080% ( 332) 00:14:31.106 10783.651 - 10843.229: 40.3143% ( 440) 00:14:31.106 10843.229 - 10902.807: 43.9364% ( 408) 00:14:31.106 10902.807 - 10962.385: 47.4876% ( 400) 00:14:31.106 10962.385 - 11021.964: 50.8434% ( 378) 00:14:31.106 11021.964 - 11081.542: 54.6964% ( 434) 00:14:31.106 11081.542 - 11141.120: 58.5405% ( 433) 00:14:31.106 11141.120 - 11200.698: 62.4556% ( 441) 00:14:31.106 11200.698 - 11260.276: 65.3853% ( 330) 00:14:31.106 11260.276 - 11319.855: 68.1108% ( 307) 00:14:31.106 11319.855 - 11379.433: 70.7031% ( 292) 00:14:31.106 11379.433 - 11439.011: 73.4020% ( 304) 00:14:31.106 11439.011 - 11498.589: 75.5682% ( 244) 00:14:31.106 11498.589 - 11558.167: 77.7699% ( 248) 00:14:31.106 11558.167 - 11617.745: 79.7053% ( 218) 00:14:31.106 11617.745 - 11677.324: 81.5785% ( 211) 00:14:31.106 11677.324 - 11736.902: 83.1676% ( 179) 00:14:31.106 11736.902 - 11796.480: 84.7124% ( 174) 00:14:31.106 11796.480 - 11856.058: 86.0884% ( 155) 00:14:31.106 11856.058 - 11915.636: 87.2337% ( 129) 00:14:31.106 11915.636 - 11975.215: 88.2369% ( 113) 00:14:31.106 11975.215 - 12034.793: 89.2489% ( 114) 00:14:31.106 12034.793 - 12094.371: 90.1012% ( 96) 00:14:31.106 12094.371 - 12153.949: 90.9268% ( 93) 00:14:31.106 12153.949 - 12213.527: 91.5305% ( 68) 00:14:31.106 12213.527 - 12273.105: 92.1342% ( 68) 00:14:31.106 12273.105 - 12332.684: 92.8445% ( 80) 00:14:31.106 12332.684 - 12392.262: 93.4482% ( 68) 00:14:31.106 12392.262 - 12451.840: 93.9897% ( 61) 00:14:31.106 12451.840 - 12511.418: 94.3981% ( 46) 00:14:31.106 12511.418 - 12570.996: 94.7532% ( 40) 00:14:31.106 12570.996 - 12630.575: 95.2060% ( 51) 00:14:31.106 12630.575 - 12690.153: 95.4989% ( 33) 00:14:31.106 12690.153 - 12749.731: 95.7120% ( 24) 00:14:31.106 12749.731 - 12809.309: 95.9339% ( 25) 00:14:31.106 12809.309 - 12868.887: 96.1204% ( 21) 00:14:31.106 12868.887 - 12928.465: 96.3335% ( 24) 00:14:31.106 12928.465 - 12988.044: 
96.5110% ( 20) 00:14:31.106 12988.044 - 13047.622: 96.7063% ( 22) 00:14:31.106 13047.622 - 13107.200: 96.8217% ( 13) 00:14:31.106 13107.200 - 13166.778: 96.8839% ( 7) 00:14:31.106 13166.778 - 13226.356: 96.9638% ( 9) 00:14:31.106 13226.356 - 13285.935: 96.9993% ( 4) 00:14:31.106 13285.935 - 13345.513: 97.0437% ( 5) 00:14:31.106 13345.513 - 13405.091: 97.0881% ( 5) 00:14:31.106 13405.091 - 13464.669: 97.1325% ( 5) 00:14:31.106 13464.669 - 13524.247: 97.1857% ( 6) 00:14:31.106 13524.247 - 13583.825: 97.2212% ( 4) 00:14:31.106 13583.825 - 13643.404: 97.2479% ( 3) 00:14:31.106 13643.404 - 13702.982: 97.2656% ( 2) 00:14:31.106 13702.982 - 13762.560: 97.3011% ( 4) 00:14:31.106 13762.560 - 13822.138: 97.3189% ( 2) 00:14:31.106 13822.138 - 13881.716: 97.3633% ( 5) 00:14:31.106 13881.716 - 13941.295: 97.3810% ( 2) 00:14:31.106 13941.295 - 14000.873: 97.4165% ( 4) 00:14:31.106 14000.873 - 14060.451: 97.4254% ( 1) 00:14:31.106 14060.451 - 14120.029: 97.4521% ( 3) 00:14:31.106 14120.029 - 14179.607: 97.5231% ( 8) 00:14:31.106 14179.607 - 14239.185: 97.6030% ( 9) 00:14:31.106 14239.185 - 14298.764: 97.6918% ( 10) 00:14:31.106 14298.764 - 14358.342: 97.9403% ( 28) 00:14:31.106 14358.342 - 14417.920: 98.0558% ( 13) 00:14:31.106 14417.920 - 14477.498: 98.1179% ( 7) 00:14:31.106 14477.498 - 14537.076: 98.1889% ( 8) 00:14:31.106 14537.076 - 14596.655: 98.2333% ( 5) 00:14:31.106 14596.655 - 14656.233: 98.2866% ( 6) 00:14:31.106 14656.233 - 14715.811: 98.3665% ( 9) 00:14:31.106 14715.811 - 14775.389: 98.4375% ( 8) 00:14:31.106 14775.389 - 14834.967: 98.5174% ( 9) 00:14:31.106 14834.967 - 14894.545: 98.6239% ( 12) 00:14:31.106 14894.545 - 14954.124: 98.6861% ( 7) 00:14:31.106 14954.124 - 15013.702: 98.7216% ( 4) 00:14:31.106 15013.702 - 15073.280: 98.7660% ( 5) 00:14:31.106 15073.280 - 15132.858: 98.8015% ( 4) 00:14:31.106 15132.858 - 15192.436: 98.8281% ( 3) 00:14:31.106 15192.436 - 15252.015: 98.8548% ( 3) 00:14:31.106 15252.015 - 15371.171: 98.8636% ( 1) 00:14:31.106 34317.033 - 34555.345: 98.9080% ( 5) 00:14:31.106 34555.345 - 34793.658: 98.9613% ( 6) 00:14:31.106 34793.658 - 35031.971: 99.0057% ( 5) 00:14:31.106 35031.971 - 35270.284: 99.0589% ( 6) 00:14:31.106 35270.284 - 35508.596: 99.1122% ( 6) 00:14:31.106 35508.596 - 35746.909: 99.1566% ( 5) 00:14:31.106 35746.909 - 35985.222: 99.2099% ( 6) 00:14:31.106 35985.222 - 36223.535: 99.2631% ( 6) 00:14:31.106 36223.535 - 36461.847: 99.3164% ( 6) 00:14:31.106 36461.847 - 36700.160: 99.3608% ( 5) 00:14:31.106 36700.160 - 36938.473: 99.4141% ( 6) 00:14:31.106 36938.473 - 37176.785: 99.4318% ( 2) 00:14:31.106 42181.353 - 42419.665: 99.4585% ( 3) 00:14:31.106 42419.665 - 42657.978: 99.5117% ( 6) 00:14:31.106 42657.978 - 42896.291: 99.5650% ( 6) 00:14:31.106 42896.291 - 43134.604: 99.6094% ( 5) 00:14:31.106 43134.604 - 43372.916: 99.6626% ( 6) 00:14:31.106 43372.916 - 43611.229: 99.7159% ( 6) 00:14:31.106 43611.229 - 43849.542: 99.7603% ( 5) 00:14:31.106 43849.542 - 44087.855: 99.8136% ( 6) 00:14:31.106 44087.855 - 44326.167: 99.8668% ( 6) 00:14:31.106 44326.167 - 44564.480: 99.9112% ( 5) 00:14:31.106 44564.480 - 44802.793: 99.9645% ( 6) 00:14:31.106 44802.793 - 45041.105: 100.0000% ( 4) 00:14:31.106 00:14:31.106 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:14:31.106 ============================================================================== 00:14:31.106 Range in us Cumulative IO count 00:14:31.106 8519.680 - 8579.258: 0.0089% ( 1) 00:14:31.106 8638.836 - 8698.415: 0.0266% ( 2) 00:14:31.106 8757.993 - 8817.571: 0.0355% ( 1) 00:14:31.106 
8817.571 - 8877.149: 0.1154% ( 9) 00:14:31.106 8877.149 - 8936.727: 0.2930% ( 20) 00:14:31.106 8936.727 - 8996.305: 0.5682% ( 31) 00:14:31.106 8996.305 - 9055.884: 0.9233% ( 40) 00:14:31.106 9055.884 - 9115.462: 1.3849% ( 52) 00:14:31.106 9115.462 - 9175.040: 1.7312% ( 39) 00:14:31.106 9175.040 - 9234.618: 2.0685% ( 38) 00:14:31.106 9234.618 - 9294.196: 2.3704% ( 34) 00:14:31.106 9294.196 - 9353.775: 2.7344% ( 41) 00:14:31.106 9353.775 - 9413.353: 3.1516% ( 47) 00:14:31.106 9413.353 - 9472.931: 3.7376% ( 66) 00:14:31.106 9472.931 - 9532.509: 4.2614% ( 59) 00:14:31.106 9532.509 - 9592.087: 4.7940% ( 60) 00:14:31.106 9592.087 - 9651.665: 5.5487% ( 85) 00:14:31.106 9651.665 - 9711.244: 5.9659% ( 47) 00:14:31.106 9711.244 - 9770.822: 6.3033% ( 38) 00:14:31.106 9770.822 - 9830.400: 6.6495% ( 39) 00:14:31.107 9830.400 - 9889.978: 7.0224% ( 42) 00:14:31.107 9889.978 - 9949.556: 7.5195% ( 56) 00:14:31.107 9949.556 - 10009.135: 8.2120% ( 78) 00:14:31.107 10009.135 - 10068.713: 8.9045% ( 78) 00:14:31.107 10068.713 - 10128.291: 10.0142% ( 125) 00:14:31.107 10128.291 - 10187.869: 11.3370% ( 149) 00:14:31.107 10187.869 - 10247.447: 13.1747% ( 207) 00:14:31.107 10247.447 - 10307.025: 14.6484% ( 166) 00:14:31.107 10307.025 - 10366.604: 16.8146% ( 244) 00:14:31.107 10366.604 - 10426.182: 19.0785% ( 255) 00:14:31.107 10426.182 - 10485.760: 21.5820% ( 282) 00:14:31.107 10485.760 - 10545.338: 24.0945% ( 283) 00:14:31.107 10545.338 - 10604.916: 26.9354% ( 320) 00:14:31.107 10604.916 - 10664.495: 30.3711% ( 387) 00:14:31.107 10664.495 - 10724.073: 33.9666% ( 405) 00:14:31.107 10724.073 - 10783.651: 36.9673% ( 338) 00:14:31.107 10783.651 - 10843.229: 40.0923% ( 352) 00:14:31.107 10843.229 - 10902.807: 43.6967% ( 406) 00:14:31.107 10902.807 - 10962.385: 48.0380% ( 489) 00:14:31.107 10962.385 - 11021.964: 51.7312% ( 416) 00:14:31.107 11021.964 - 11081.542: 54.6165% ( 325) 00:14:31.107 11081.542 - 11141.120: 58.0611% ( 388) 00:14:31.107 11141.120 - 11200.698: 61.1328% ( 346) 00:14:31.107 11200.698 - 11260.276: 64.2045% ( 346) 00:14:31.107 11260.276 - 11319.855: 67.0987% ( 326) 00:14:31.107 11319.855 - 11379.433: 70.7564% ( 412) 00:14:31.107 11379.433 - 11439.011: 73.7837% ( 341) 00:14:31.107 11439.011 - 11498.589: 76.2251% ( 275) 00:14:31.107 11498.589 - 11558.167: 78.1161% ( 213) 00:14:31.107 11558.167 - 11617.745: 79.9716% ( 209) 00:14:31.107 11617.745 - 11677.324: 81.7205% ( 197) 00:14:31.107 11677.324 - 11736.902: 83.5849% ( 210) 00:14:31.107 11736.902 - 11796.480: 85.0408% ( 164) 00:14:31.107 11796.480 - 11856.058: 86.2482% ( 136) 00:14:31.107 11856.058 - 11915.636: 87.3935% ( 129) 00:14:31.107 11915.636 - 11975.215: 88.4322% ( 117) 00:14:31.107 11975.215 - 12034.793: 89.4886% ( 119) 00:14:31.107 12034.793 - 12094.371: 90.3143% ( 93) 00:14:31.107 12094.371 - 12153.949: 91.0245% ( 80) 00:14:31.107 12153.949 - 12213.527: 91.7702% ( 84) 00:14:31.107 12213.527 - 12273.105: 92.4272% ( 74) 00:14:31.107 12273.105 - 12332.684: 93.0487% ( 70) 00:14:31.107 12332.684 - 12392.262: 93.3594% ( 35) 00:14:31.107 12392.262 - 12451.840: 93.7056% ( 39) 00:14:31.107 12451.840 - 12511.418: 94.0518% ( 39) 00:14:31.107 12511.418 - 12570.996: 94.3892% ( 38) 00:14:31.107 12570.996 - 12630.575: 94.6644% ( 31) 00:14:31.107 12630.575 - 12690.153: 94.9485% ( 32) 00:14:31.107 12690.153 - 12749.731: 95.3036% ( 40) 00:14:31.107 12749.731 - 12809.309: 95.6232% ( 36) 00:14:31.107 12809.309 - 12868.887: 95.8008% ( 20) 00:14:31.107 12868.887 - 12928.465: 95.9872% ( 21) 00:14:31.107 12928.465 - 12988.044: 96.1648% ( 20) 00:14:31.107 12988.044 - 
13047.622: 96.3423% ( 20) 00:14:31.107 13047.622 - 13107.200: 96.5021% ( 18) 00:14:31.107 13107.200 - 13166.778: 96.5909% ( 10) 00:14:31.107 13166.778 - 13226.356: 96.7063% ( 13) 00:14:31.107 13226.356 - 13285.935: 96.7685% ( 7) 00:14:31.107 13285.935 - 13345.513: 96.8661% ( 11) 00:14:31.107 13345.513 - 13405.091: 96.9194% ( 6) 00:14:31.107 13405.091 - 13464.669: 96.9993% ( 9) 00:14:31.107 13464.669 - 13524.247: 97.0437% ( 5) 00:14:31.107 13524.247 - 13583.825: 97.0881% ( 5) 00:14:31.107 13583.825 - 13643.404: 97.1325% ( 5) 00:14:31.107 13643.404 - 13702.982: 97.1768% ( 5) 00:14:31.107 13702.982 - 13762.560: 97.2212% ( 5) 00:14:31.107 13762.560 - 13822.138: 97.2479% ( 3) 00:14:31.107 13822.138 - 13881.716: 97.3100% ( 7) 00:14:31.107 13881.716 - 13941.295: 97.3810% ( 8) 00:14:31.107 13941.295 - 14000.873: 97.4876% ( 12) 00:14:31.107 14000.873 - 14060.451: 97.6385% ( 17) 00:14:31.107 14060.451 - 14120.029: 97.7539% ( 13) 00:14:31.107 14120.029 - 14179.607: 97.8516% ( 11) 00:14:31.107 14179.607 - 14239.185: 97.9670% ( 13) 00:14:31.107 14239.185 - 14298.764: 98.3043% ( 38) 00:14:31.107 14298.764 - 14358.342: 98.4375% ( 15) 00:14:31.107 14358.342 - 14417.920: 98.5884% ( 17) 00:14:31.107 14417.920 - 14477.498: 98.6772% ( 10) 00:14:31.107 14477.498 - 14537.076: 98.7216% ( 5) 00:14:31.107 14537.076 - 14596.655: 98.7571% ( 4) 00:14:31.107 14596.655 - 14656.233: 98.7660% ( 1) 00:14:31.107 14656.233 - 14715.811: 98.7837% ( 2) 00:14:31.107 14715.811 - 14775.389: 98.8104% ( 3) 00:14:31.107 14775.389 - 14834.967: 98.8370% ( 3) 00:14:31.107 14834.967 - 14894.545: 98.8548% ( 2) 00:14:31.107 14894.545 - 14954.124: 98.8636% ( 1) 00:14:31.107 31695.593 - 31933.905: 98.9080% ( 5) 00:14:31.107 31933.905 - 32172.218: 98.9524% ( 5) 00:14:31.107 32172.218 - 32410.531: 98.9968% ( 5) 00:14:31.107 32410.531 - 32648.844: 99.0501% ( 6) 00:14:31.107 32648.844 - 32887.156: 99.0945% ( 5) 00:14:31.107 32887.156 - 33125.469: 99.1477% ( 6) 00:14:31.107 33125.469 - 33363.782: 99.1832% ( 4) 00:14:31.107 33363.782 - 33602.095: 99.2365% ( 6) 00:14:31.107 33602.095 - 33840.407: 99.2898% ( 6) 00:14:31.107 33840.407 - 34078.720: 99.3342% ( 5) 00:14:31.107 34078.720 - 34317.033: 99.3874% ( 6) 00:14:31.107 34317.033 - 34555.345: 99.4318% ( 5) 00:14:31.107 39798.225 - 40036.538: 99.4496% ( 2) 00:14:31.107 40036.538 - 40274.851: 99.4940% ( 5) 00:14:31.107 40274.851 - 40513.164: 99.5472% ( 6) 00:14:31.107 40513.164 - 40751.476: 99.5916% ( 5) 00:14:31.107 40751.476 - 40989.789: 99.6449% ( 6) 00:14:31.107 40989.789 - 41228.102: 99.6893% ( 5) 00:14:31.107 41228.102 - 41466.415: 99.7425% ( 6) 00:14:31.107 41466.415 - 41704.727: 99.7958% ( 6) 00:14:31.107 41704.727 - 41943.040: 99.8402% ( 5) 00:14:31.107 41943.040 - 42181.353: 99.8935% ( 6) 00:14:31.107 42181.353 - 42419.665: 99.9467% ( 6) 00:14:31.107 42419.665 - 42657.978: 100.0000% ( 6) 00:14:31.107 00:14:31.107 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:14:31.107 ============================================================================== 00:14:31.107 Range in us Cumulative IO count 00:14:31.107 8579.258 - 8638.836: 0.0089% ( 1) 00:14:31.107 8698.415 - 8757.993: 0.0178% ( 1) 00:14:31.107 8757.993 - 8817.571: 0.0888% ( 8) 00:14:31.107 8817.571 - 8877.149: 0.2308% ( 16) 00:14:31.107 8877.149 - 8936.727: 0.4261% ( 22) 00:14:31.107 8936.727 - 8996.305: 0.6303% ( 23) 00:14:31.107 8996.305 - 9055.884: 1.0032% ( 42) 00:14:31.107 9055.884 - 9115.462: 1.5714% ( 64) 00:14:31.107 9115.462 - 9175.040: 1.8999% ( 37) 00:14:31.107 9175.040 - 9234.618: 2.2461% ( 39) 00:14:31.107 
9234.618 - 9294.196: 2.5568% ( 35) 00:14:31.107 9294.196 - 9353.775: 2.9474% ( 44) 00:14:31.107 9353.775 - 9413.353: 3.4624% ( 58) 00:14:31.107 9413.353 - 9472.931: 3.8530% ( 44) 00:14:31.107 9472.931 - 9532.509: 4.3058% ( 51) 00:14:31.107 9532.509 - 9592.087: 4.9183% ( 69) 00:14:31.107 9592.087 - 9651.665: 5.3445% ( 48) 00:14:31.107 9651.665 - 9711.244: 5.6463% ( 34) 00:14:31.107 9711.244 - 9770.822: 5.9659% ( 36) 00:14:31.107 9770.822 - 9830.400: 6.2766% ( 35) 00:14:31.107 9830.400 - 9889.978: 6.7116% ( 49) 00:14:31.107 9889.978 - 9949.556: 7.2088% ( 56) 00:14:31.107 9949.556 - 10009.135: 7.7859% ( 65) 00:14:31.107 10009.135 - 10068.713: 8.7092% ( 104) 00:14:31.107 10068.713 - 10128.291: 9.8988% ( 134) 00:14:31.107 10128.291 - 10187.869: 11.1683% ( 143) 00:14:31.107 10187.869 - 10247.447: 12.6598% ( 168) 00:14:31.107 10247.447 - 10307.025: 14.4531% ( 202) 00:14:31.107 10307.025 - 10366.604: 16.6193% ( 244) 00:14:31.107 10366.604 - 10426.182: 18.7234% ( 237) 00:14:31.107 10426.182 - 10485.760: 21.1559% ( 274) 00:14:31.107 10485.760 - 10545.338: 23.6772% ( 284) 00:14:31.107 10545.338 - 10604.916: 26.6246% ( 332) 00:14:31.107 10604.916 - 10664.495: 30.1935% ( 402) 00:14:31.107 10664.495 - 10724.073: 33.5760% ( 381) 00:14:31.107 10724.073 - 10783.651: 36.8075% ( 364) 00:14:31.107 10783.651 - 10843.229: 40.7493% ( 444) 00:14:31.107 10843.229 - 10902.807: 44.8331% ( 460) 00:14:31.107 10902.807 - 10962.385: 48.8370% ( 451) 00:14:31.107 10962.385 - 11021.964: 52.4325% ( 405) 00:14:31.107 11021.964 - 11081.542: 55.8949% ( 390) 00:14:31.107 11081.542 - 11141.120: 58.8778% ( 336) 00:14:31.107 11141.120 - 11200.698: 62.0739% ( 360) 00:14:31.107 11200.698 - 11260.276: 64.9059% ( 319) 00:14:31.107 11260.276 - 11319.855: 67.3739% ( 278) 00:14:31.107 11319.855 - 11379.433: 70.2592% ( 325) 00:14:31.107 11379.433 - 11439.011: 72.9759% ( 306) 00:14:31.107 11439.011 - 11498.589: 75.1864% ( 249) 00:14:31.107 11498.589 - 11558.167: 77.7344% ( 287) 00:14:31.107 11558.167 - 11617.745: 79.7408% ( 226) 00:14:31.107 11617.745 - 11677.324: 81.5341% ( 202) 00:14:31.107 11677.324 - 11736.902: 83.1587% ( 183) 00:14:31.107 11736.902 - 11796.480: 84.8810% ( 194) 00:14:31.107 11796.480 - 11856.058: 86.1417% ( 142) 00:14:31.107 11856.058 - 11915.636: 87.3224% ( 133) 00:14:31.107 11915.636 - 11975.215: 88.3256% ( 113) 00:14:31.107 11975.215 - 12034.793: 89.2844% ( 108) 00:14:31.107 12034.793 - 12094.371: 90.0479% ( 86) 00:14:31.107 12094.371 - 12153.949: 90.8647% ( 92) 00:14:31.107 12153.949 - 12213.527: 91.5661% ( 79) 00:14:31.107 12213.527 - 12273.105: 92.2763% ( 80) 00:14:31.108 12273.105 - 12332.684: 92.8267% ( 62) 00:14:31.108 12332.684 - 12392.262: 93.2706% ( 50) 00:14:31.108 12392.262 - 12451.840: 93.6612% ( 44) 00:14:31.108 12451.840 - 12511.418: 94.0785% ( 47) 00:14:31.108 12511.418 - 12570.996: 94.4070% ( 37) 00:14:31.108 12570.996 - 12630.575: 94.6911% ( 32) 00:14:31.108 12630.575 - 12690.153: 94.9840% ( 33) 00:14:31.108 12690.153 - 12749.731: 95.2592% ( 31) 00:14:31.108 12749.731 - 12809.309: 95.4901% ( 26) 00:14:31.108 12809.309 - 12868.887: 95.7386% ( 28) 00:14:31.108 12868.887 - 12928.465: 96.0227% ( 32) 00:14:31.108 12928.465 - 12988.044: 96.2180% ( 22) 00:14:31.108 12988.044 - 13047.622: 96.3068% ( 10) 00:14:31.108 13047.622 - 13107.200: 96.3690% ( 7) 00:14:31.108 13107.200 - 13166.778: 96.4577% ( 10) 00:14:31.108 13166.778 - 13226.356: 96.5376% ( 9) 00:14:31.108 13226.356 - 13285.935: 96.6264% ( 10) 00:14:31.108 13285.935 - 13345.513: 96.7152% ( 10) 00:14:31.108 13345.513 - 13405.091: 96.8129% ( 11) 
00:14:31.108 13405.091 - 13464.669: 96.8750% ( 7) 00:14:31.108 13464.669 - 13524.247: 96.9105% ( 4) 00:14:31.108 13524.247 - 13583.825: 96.9283% ( 2) 00:14:31.108 13583.825 - 13643.404: 96.9549% ( 3) 00:14:31.108 13643.404 - 13702.982: 96.9904% ( 4) 00:14:31.108 13702.982 - 13762.560: 97.0437% ( 6) 00:14:31.108 13762.560 - 13822.138: 97.1147% ( 8) 00:14:31.108 13822.138 - 13881.716: 97.1946% ( 9) 00:14:31.108 13881.716 - 13941.295: 97.2745% ( 9) 00:14:31.108 13941.295 - 14000.873: 97.3366% ( 7) 00:14:31.108 14000.873 - 14060.451: 97.4254% ( 10) 00:14:31.108 14060.451 - 14120.029: 97.5497% ( 14) 00:14:31.108 14120.029 - 14179.607: 97.6385% ( 10) 00:14:31.108 14179.607 - 14239.185: 97.7362% ( 11) 00:14:31.108 14239.185 - 14298.764: 97.8693% ( 15) 00:14:31.108 14298.764 - 14358.342: 98.0202% ( 17) 00:14:31.108 14358.342 - 14417.920: 98.1623% ( 16) 00:14:31.108 14417.920 - 14477.498: 98.3398% ( 20) 00:14:31.108 14477.498 - 14537.076: 98.4286% ( 10) 00:14:31.108 14537.076 - 14596.655: 98.5440% ( 13) 00:14:31.108 14596.655 - 14656.233: 98.6239% ( 9) 00:14:31.108 14656.233 - 14715.811: 98.6861% ( 7) 00:14:31.108 14715.811 - 14775.389: 98.7305% ( 5) 00:14:31.108 14775.389 - 14834.967: 98.7571% ( 3) 00:14:31.108 14834.967 - 14894.545: 98.7837% ( 3) 00:14:31.108 14894.545 - 14954.124: 98.8015% ( 2) 00:14:31.108 14954.124 - 15013.702: 98.8281% ( 3) 00:14:31.108 15013.702 - 15073.280: 98.8548% ( 3) 00:14:31.108 15073.280 - 15132.858: 98.8636% ( 1) 00:14:31.108 28478.371 - 28597.527: 98.8725% ( 1) 00:14:31.108 28597.527 - 28716.684: 98.8991% ( 3) 00:14:31.108 28716.684 - 28835.840: 98.9169% ( 2) 00:14:31.108 28835.840 - 28954.996: 98.9435% ( 3) 00:14:31.108 28954.996 - 29074.153: 98.9702% ( 3) 00:14:31.108 29074.153 - 29193.309: 98.9879% ( 2) 00:14:31.108 29193.309 - 29312.465: 99.0146% ( 3) 00:14:31.108 29312.465 - 29431.622: 99.0412% ( 3) 00:14:31.108 29431.622 - 29550.778: 99.0678% ( 3) 00:14:31.108 29550.778 - 29669.935: 99.0945% ( 3) 00:14:31.108 29669.935 - 29789.091: 99.1122% ( 2) 00:14:31.108 29789.091 - 29908.247: 99.1388% ( 3) 00:14:31.108 29908.247 - 30027.404: 99.1655% ( 3) 00:14:31.108 30027.404 - 30146.560: 99.1832% ( 2) 00:14:31.108 30146.560 - 30265.716: 99.2099% ( 3) 00:14:31.108 30265.716 - 30384.873: 99.2365% ( 3) 00:14:31.108 30384.873 - 30504.029: 99.2631% ( 3) 00:14:31.108 30504.029 - 30742.342: 99.3075% ( 5) 00:14:31.108 30742.342 - 30980.655: 99.3608% ( 6) 00:14:31.108 30980.655 - 31218.967: 99.4141% ( 6) 00:14:31.108 31218.967 - 31457.280: 99.4318% ( 2) 00:14:31.108 36700.160 - 36938.473: 99.4762% ( 5) 00:14:31.108 36938.473 - 37176.785: 99.5206% ( 5) 00:14:31.108 37176.785 - 37415.098: 99.5739% ( 6) 00:14:31.108 37415.098 - 37653.411: 99.6271% ( 6) 00:14:31.108 37653.411 - 37891.724: 99.6715% ( 5) 00:14:31.108 37891.724 - 38130.036: 99.7248% ( 6) 00:14:31.108 38130.036 - 38368.349: 99.7692% ( 5) 00:14:31.108 38368.349 - 38606.662: 99.8136% ( 5) 00:14:31.108 38606.662 - 38844.975: 99.8668% ( 6) 00:14:31.108 38844.975 - 39083.287: 99.9201% ( 6) 00:14:31.108 39083.287 - 39321.600: 99.9734% ( 6) 00:14:31.108 39321.600 - 39559.913: 100.0000% ( 3) 00:14:31.108 00:14:31.108 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:14:31.108 ============================================================================== 00:14:31.108 Range in us Cumulative IO count 00:14:31.108 8698.415 - 8757.993: 0.0088% ( 1) 00:14:31.108 8817.571 - 8877.149: 0.0618% ( 6) 00:14:31.108 8877.149 - 8936.727: 0.2207% ( 18) 00:14:31.108 8936.727 - 8996.305: 0.4326% ( 24) 00:14:31.108 8996.305 - 
9055.884: 0.7592% ( 37) 00:14:31.108 9055.884 - 9115.462: 1.2182% ( 52) 00:14:31.108 9115.462 - 9175.040: 1.7214% ( 57) 00:14:31.108 9175.040 - 9234.618: 2.0392% ( 36) 00:14:31.108 9234.618 - 9294.196: 2.4629% ( 48) 00:14:31.108 9294.196 - 9353.775: 2.8513% ( 44) 00:14:31.108 9353.775 - 9413.353: 3.3104% ( 52) 00:14:31.108 9413.353 - 9472.931: 3.9371% ( 71) 00:14:31.108 9472.931 - 9532.509: 4.5286% ( 67) 00:14:31.108 9532.509 - 9592.087: 5.2701% ( 84) 00:14:31.108 9592.087 - 9651.665: 5.5879% ( 36) 00:14:31.108 9651.665 - 9711.244: 5.8528% ( 30) 00:14:31.108 9711.244 - 9770.822: 6.1706% ( 36) 00:14:31.108 9770.822 - 9830.400: 6.6649% ( 56) 00:14:31.108 9830.400 - 9889.978: 7.0621% ( 45) 00:14:31.108 9889.978 - 9949.556: 7.5653% ( 57) 00:14:31.108 9949.556 - 10009.135: 8.3157% ( 85) 00:14:31.108 10009.135 - 10068.713: 9.0660% ( 85) 00:14:31.108 10068.713 - 10128.291: 10.3460% ( 145) 00:14:31.108 10128.291 - 10187.869: 11.5290% ( 134) 00:14:31.108 10187.869 - 10247.447: 12.9149% ( 157) 00:14:31.108 10247.447 - 10307.025: 14.6893% ( 201) 00:14:31.108 10307.025 - 10366.604: 16.6578% ( 223) 00:14:31.108 10366.604 - 10426.182: 18.8383% ( 247) 00:14:31.108 10426.182 - 10485.760: 20.8510% ( 228) 00:14:31.108 10485.760 - 10545.338: 23.1374% ( 259) 00:14:31.108 10545.338 - 10604.916: 26.1653% ( 343) 00:14:31.108 10604.916 - 10664.495: 29.6345% ( 393) 00:14:31.108 10664.495 - 10724.073: 33.2009% ( 404) 00:14:31.108 10724.073 - 10783.651: 36.1229% ( 331) 00:14:31.108 10783.651 - 10843.229: 39.7422% ( 410) 00:14:31.108 10843.229 - 10902.807: 43.7588% ( 455) 00:14:31.108 10902.807 - 10962.385: 47.5547% ( 430) 00:14:31.108 10962.385 - 11021.964: 51.3330% ( 428) 00:14:31.108 11021.964 - 11081.542: 55.1289% ( 430) 00:14:31.108 11081.542 - 11141.120: 58.1568% ( 343) 00:14:31.108 11141.120 - 11200.698: 61.4672% ( 375) 00:14:31.108 11200.698 - 11260.276: 64.1596% ( 305) 00:14:31.108 11260.276 - 11319.855: 67.0374% ( 326) 00:14:31.108 11319.855 - 11379.433: 69.5533% ( 285) 00:14:31.108 11379.433 - 11439.011: 72.0869% ( 287) 00:14:31.108 11439.011 - 11498.589: 74.9647% ( 326) 00:14:31.108 11498.589 - 11558.167: 77.4718% ( 284) 00:14:31.108 11558.167 - 11617.745: 79.5551% ( 236) 00:14:31.108 11617.745 - 11677.324: 81.2412% ( 191) 00:14:31.108 11677.324 - 11736.902: 83.0685% ( 207) 00:14:31.108 11736.902 - 11796.480: 84.9929% ( 218) 00:14:31.108 11796.480 - 11856.058: 86.2465% ( 142) 00:14:31.108 11856.058 - 11915.636: 87.2793% ( 117) 00:14:31.108 11915.636 - 11975.215: 88.3475% ( 121) 00:14:31.108 11975.215 - 12034.793: 89.3362% ( 112) 00:14:31.108 12034.793 - 12094.371: 90.1483% ( 92) 00:14:31.108 12094.371 - 12153.949: 90.8457% ( 79) 00:14:31.108 12153.949 - 12213.527: 91.5166% ( 76) 00:14:31.108 12213.527 - 12273.105: 92.0286% ( 58) 00:14:31.108 12273.105 - 12332.684: 92.4876% ( 52) 00:14:31.108 12332.684 - 12392.262: 92.9290% ( 50) 00:14:31.108 12392.262 - 12451.840: 93.3263% ( 45) 00:14:31.108 12451.840 - 12511.418: 93.7323% ( 46) 00:14:31.108 12511.418 - 12570.996: 94.0943% ( 41) 00:14:31.108 12570.996 - 12630.575: 94.3856% ( 33) 00:14:31.108 12630.575 - 12690.153: 94.7034% ( 36) 00:14:31.108 12690.153 - 12749.731: 95.0653% ( 41) 00:14:31.108 12749.731 - 12809.309: 95.3125% ( 28) 00:14:31.108 12809.309 - 12868.887: 95.5420% ( 26) 00:14:31.108 12868.887 - 12928.465: 95.8775% ( 38) 00:14:31.108 12928.465 - 12988.044: 96.0982% ( 25) 00:14:31.108 12988.044 - 13047.622: 96.3189% ( 25) 00:14:31.108 13047.622 - 13107.200: 96.4336% ( 13) 00:14:31.108 13107.200 - 13166.778: 96.6013% ( 19) 00:14:31.108 13166.778 - 
13226.356: 96.7249% ( 14) 00:14:31.108 13226.356 - 13285.935: 96.8662% ( 16) 00:14:31.108 13285.935 - 13345.513: 96.9544% ( 10) 00:14:31.108 13345.513 - 13405.091: 96.9898% ( 4) 00:14:31.108 13405.091 - 13464.669: 97.0162% ( 3) 00:14:31.108 13464.669 - 13524.247: 97.0427% ( 3) 00:14:31.108 13524.247 - 13583.825: 97.0604% ( 2) 00:14:31.108 13583.825 - 13643.404: 97.0780% ( 2) 00:14:31.108 13643.404 - 13702.982: 97.0957% ( 2) 00:14:31.108 13702.982 - 13762.560: 97.1133% ( 2) 00:14:31.108 13762.560 - 13822.138: 97.1310% ( 2) 00:14:31.108 13822.138 - 13881.716: 97.1575% ( 3) 00:14:31.108 13881.716 - 13941.295: 97.1751% ( 2) 00:14:31.109 13941.295 - 14000.873: 97.1840% ( 1) 00:14:31.109 14000.873 - 14060.451: 97.2105% ( 3) 00:14:31.109 14060.451 - 14120.029: 97.2193% ( 1) 00:14:31.109 14120.029 - 14179.607: 97.2458% ( 3) 00:14:31.109 14179.607 - 14239.185: 97.2634% ( 2) 00:14:31.109 14239.185 - 14298.764: 97.2722% ( 1) 00:14:31.109 14298.764 - 14358.342: 97.3782% ( 12) 00:14:31.109 14358.342 - 14417.920: 97.4576% ( 9) 00:14:31.109 14417.920 - 14477.498: 97.5989% ( 16) 00:14:31.109 14477.498 - 14537.076: 97.7489% ( 17) 00:14:31.109 14537.076 - 14596.655: 97.9167% ( 19) 00:14:31.109 14596.655 - 14656.233: 98.1992% ( 32) 00:14:31.109 14656.233 - 14715.811: 98.3227% ( 14) 00:14:31.109 14715.811 - 14775.389: 98.4905% ( 19) 00:14:31.109 14775.389 - 14834.967: 98.6405% ( 17) 00:14:31.109 14834.967 - 14894.545: 98.7200% ( 9) 00:14:31.109 14894.545 - 14954.124: 98.7906% ( 8) 00:14:31.109 14954.124 - 15013.702: 98.8347% ( 5) 00:14:31.109 15013.702 - 15073.280: 98.8524% ( 2) 00:14:31.109 15073.280 - 15132.858: 98.8701% ( 2) 00:14:31.109 19303.331 - 19422.487: 98.8877% ( 2) 00:14:31.109 19422.487 - 19541.644: 98.9054% ( 2) 00:14:31.109 19541.644 - 19660.800: 98.9319% ( 3) 00:14:31.109 19660.800 - 19779.956: 98.9583% ( 3) 00:14:31.109 19779.956 - 19899.113: 98.9848% ( 3) 00:14:31.109 19899.113 - 20018.269: 99.0113% ( 3) 00:14:31.109 20018.269 - 20137.425: 99.0290% ( 2) 00:14:31.109 20137.425 - 20256.582: 99.0554% ( 3) 00:14:31.109 20256.582 - 20375.738: 99.0819% ( 3) 00:14:31.109 20375.738 - 20494.895: 99.1084% ( 3) 00:14:31.109 20494.895 - 20614.051: 99.1349% ( 3) 00:14:31.109 20614.051 - 20733.207: 99.1614% ( 3) 00:14:31.109 20733.207 - 20852.364: 99.1879% ( 3) 00:14:31.109 20852.364 - 20971.520: 99.2055% ( 2) 00:14:31.109 20971.520 - 21090.676: 99.2320% ( 3) 00:14:31.109 21090.676 - 21209.833: 99.2585% ( 3) 00:14:31.109 21209.833 - 21328.989: 99.2850% ( 3) 00:14:31.109 21328.989 - 21448.145: 99.3114% ( 3) 00:14:31.109 21448.145 - 21567.302: 99.3379% ( 3) 00:14:31.109 21567.302 - 21686.458: 99.3644% ( 3) 00:14:31.109 21686.458 - 21805.615: 99.3909% ( 3) 00:14:31.109 21805.615 - 21924.771: 99.4174% ( 3) 00:14:31.109 21924.771 - 22043.927: 99.4350% ( 2) 00:14:31.109 27286.807 - 27405.964: 99.4615% ( 3) 00:14:31.109 27405.964 - 27525.120: 99.4792% ( 2) 00:14:31.109 27525.120 - 27644.276: 99.4968% ( 2) 00:14:31.109 27644.276 - 27763.433: 99.5233% ( 3) 00:14:31.109 27763.433 - 27882.589: 99.5498% ( 3) 00:14:31.109 27882.589 - 28001.745: 99.5763% ( 3) 00:14:31.109 28001.745 - 28120.902: 99.5939% ( 2) 00:14:31.109 28120.902 - 28240.058: 99.6204% ( 3) 00:14:31.109 28240.058 - 28359.215: 99.6469% ( 3) 00:14:31.109 28359.215 - 28478.371: 99.6734% ( 3) 00:14:31.109 28478.371 - 28597.527: 99.6999% ( 3) 00:14:31.109 28597.527 - 28716.684: 99.7175% ( 2) 00:14:31.109 28716.684 - 28835.840: 99.7440% ( 3) 00:14:31.109 28835.840 - 28954.996: 99.7705% ( 3) 00:14:31.109 28954.996 - 29074.153: 99.7970% ( 3) 00:14:31.109 
[latency histogram buckets elided: cumulative IO rises from 99.8146% in the 29074.153 - 29193.309 us bucket to 100.0000% at 29908.247 - 30027.404 us] 00:14:31.109 00:14:31.109 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:14:31.109 ============================================================================== 00:14:31.109 Range in us Cumulative IO count
00:14:31.109 [bucket-by-bucket distribution elided: starts at 0.0265% in the 8638.836 - 8698.415 us bucket, passes 50.1942% at 11021.964 - 11081.542 us and 99.4350% at 19065.018 - 19184.175 us, and reaches 100.0000% at 26691.025 - 26810.182 us] 00:14:31.110 00:14:31.110 10:06:01 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:14:31.110 00:14:31.110 real 0m3.010s 00:14:31.110 user 0m2.566s 00:14:31.110 sys 0m0.324s 00:14:31.110 ************************************ 00:14:31.110 END TEST nvme_perf 00:14:31.110 ************************************ 00:14:31.110 10:06:01 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:31.110 10:06:01 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:14:31.110 10:06:01 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:14:31.110 10:06:01 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:31.110 10:06:01 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:31.110 10:06:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:31.110 ************************************ 00:14:31.110 START TEST nvme_hello_world 00:14:31.110 ************************************ 00:14:31.110 10:06:01 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:14:31.369 Initializing NVMe Controllers 00:14:31.369 Attached to 0000:00:10.0 00:14:31.369 Namespace ID: 1 size: 6GB 00:14:31.369 Attached to 0000:00:11.0 00:14:31.369 Namespace ID: 1 size: 5GB 00:14:31.369 Attached to 0000:00:13.0 00:14:31.369 Namespace ID: 1 size: 1GB 00:14:31.369 Attached to 0000:00:12.0 00:14:31.369 Namespace ID: 1 size: 4GB 00:14:31.369 Namespace ID: 2 size: 4GB 00:14:31.369 Namespace ID: 3 size: 4GB 00:14:31.369 Initialization complete. 00:14:31.369 INFO: using host memory buffer for IO 00:14:31.369 Hello world! 00:14:31.369 INFO: using host memory buffer for IO 00:14:31.369 Hello world! 00:14:31.369 INFO: using host memory buffer for IO 00:14:31.369 Hello world! 00:14:31.369 INFO: using host memory buffer for IO 00:14:31.369 Hello world! 00:14:31.369 INFO: using host memory buffer for IO 00:14:31.369 Hello world! 00:14:31.369 INFO: using host memory buffer for IO 00:14:31.369 Hello world!
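
The hello_world pass above is the canonical SPDK attach flow: bring up the env layer, probe the local PCIe bus, accept each controller from a probe callback, then walk active namespaces in the attach callback, which is where the "Namespace ID: n size: mGB" lines come from. A minimal sketch of that pattern, assuming only the public spdk_nvme_probe() API; the app name is invented and error handling is trimmed:

#include "spdk/stdinc.h"
#include "spdk/nvme.h"
#include "spdk/env.h"

static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
         struct spdk_nvme_ctrlr_opts *opts)
{
        return true; /* attach to every controller the probe finds */
}

static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
          struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
        uint32_t nsid;

        printf("Attached to %s\n", trid->traddr);
        /* walk active namespaces, mirroring the "Namespace ID: n size:" lines */
        for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
             nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
                struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
                printf("  Namespace ID: %u size: %juGB\n", nsid,
                       (uintmax_t)(spdk_nvme_ns_get_size(ns) / 1000000000));
        }
}

int
main(void)
{
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "hello_world_sketch"; /* hypothetical app name */
        if (spdk_env_init(&opts) < 0) {
                return 1;
        }
        /* NULL trid: enumerate all local PCIe-attached NVMe controllers */
        return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
}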
00:14:31.628 00:14:31.628 real 0m0.470s 00:14:31.628 user 0m0.265s 00:14:31.628 sys 0m0.161s 00:14:31.628 ************************************ 00:14:31.628 END TEST nvme_hello_world 00:14:31.628 ************************************ 00:14:31.628 10:06:02 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:31.628 10:06:02 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:14:31.628 10:06:02 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:14:31.628 10:06:02 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:31.628 10:06:02 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:31.628 10:06:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:31.628 ************************************ 00:14:31.628 START TEST nvme_sgl 00:14:31.628 ************************************ 00:14:31.628 10:06:02 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:14:31.887 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:14:31.887 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:14:31.887 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:14:32.146 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:14:32.146 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:14:32.146 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:14:32.146 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:14:32.146 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:14:32.146 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:14:32.146 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:14:32.146 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:14:32.146 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:14:32.146 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:14:32.146 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:14:32.146 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:14:32.146 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:14:32.146 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:14:32.146 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:14:32.146 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:14:32.146 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:14:32.146 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:14:32.146 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:14:32.146 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:14:32.146 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:14:32.146 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:14:32.146 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:14:32.146 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:14:32.146 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:14:32.146 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:14:32.146 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:14:32.146 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:14:32.146 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:14:32.146 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:14:32.146 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:14:32.146 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:14:32.146 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:14:32.146 NVMe Readv/Writev Request test 00:14:32.146 Attached to 0000:00:10.0 00:14:32.146 Attached to 0000:00:11.0 00:14:32.146 Attached to 0000:00:13.0 00:14:32.146 Attached to 0000:00:12.0 00:14:32.146 0000:00:10.0: build_io_request_2 test passed 00:14:32.146 0000:00:10.0: build_io_request_4 test passed 00:14:32.146 0000:00:10.0: build_io_request_5 test passed 00:14:32.146 0000:00:10.0: build_io_request_6 test passed 00:14:32.147 0000:00:10.0: build_io_request_7 test passed 00:14:32.147 0000:00:10.0: build_io_request_10 test passed 00:14:32.147 0000:00:11.0: build_io_request_2 test passed 00:14:32.147 0000:00:11.0: build_io_request_4 test passed 00:14:32.147 0000:00:11.0: build_io_request_5 test passed 00:14:32.147 0000:00:11.0: build_io_request_6 test passed 00:14:32.147 0000:00:11.0: build_io_request_7 test passed 00:14:32.147 0000:00:11.0: build_io_request_10 test passed 00:14:32.147 Cleaning up... 00:14:32.147 ************************************ 00:14:32.147 END TEST nvme_sgl 00:14:32.147 ************************************ 00:14:32.147 00:14:32.147 real 0m0.448s 00:14:32.147 user 0m0.220s 00:14:32.147 sys 0m0.179s 00:14:32.147 10:06:02 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:32.147 10:06:02 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:14:32.147 10:06:02 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:14:32.147 10:06:02 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:32.147 10:06:02 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:32.147 10:06:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:32.147 ************************************ 00:14:32.147 START TEST nvme_e2edp 00:14:32.147 ************************************ 00:14:32.147 10:06:02 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:14:32.405 NVMe Write/Read with End-to-End data protection test 00:14:32.405 Attached to 0000:00:10.0 00:14:32.405 Attached to 0000:00:11.0 00:14:32.405 Attached to 0000:00:13.0 00:14:32.405 Attached to 0000:00:12.0 00:14:32.405 Cleaning up... 
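
The sgl pass earlier in this block builds vectored requests on purpose with segment lengths that do not add up to the request size, and the "Invalid IO length parameter" lines record the driver rejecting them; the numbered "test passed" cases are the well-formed submissions. A rough sketch of the vectored-read path under test, using the public readv API with caller-supplied SGL callbacks; the struct and function names here are invented for illustration:

#include "spdk/stdinc.h"
#include "spdk/nvme.h"

struct sgl_ctx {
        struct iovec iov[2]; /* two SGE segments backing one I/O */
        int          idx;    /* cursor advanced by the driver callbacks */
};

/* Driver rewinds the SGL to sgl_offset bytes before (re)walking it. */
static void
reset_sgl(void *ref, uint32_t sgl_offset)
{
        struct sgl_ctx *ctx = ref;

        ctx->idx = 0;
        /* sketch assumes the offset lands on a segment boundary */
        while (sgl_offset >= ctx->iov[ctx->idx].iov_len) {
                sgl_offset -= ctx->iov[ctx->idx++].iov_len;
        }
}

/* Driver pulls the next segment: its address and length. */
static int
next_sge(void *ref, void **address, uint32_t *length)
{
        struct sgl_ctx *ctx = ref;

        *address = ctx->iov[ctx->idx].iov_base;
        *length = (uint32_t)ctx->iov[ctx->idx].iov_len;
        ctx->idx++;
        return 0;
}

static int
read_vectored(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
              struct sgl_ctx *ctx, uint64_t lba, uint32_t lba_count,
              spdk_nvme_cmd_cb done_cb)
{
        /* If the iov lengths do not sum to lba_count * sector_size, the
         * submission is rejected, which the harness logs as an invalid
         * IO length. */
        return spdk_nvme_ns_cmd_readv(ns, qpair, lba, lba_count, done_cb, ctx,
                                      0 /* io_flags */, reset_sgl, next_sge);
}

The driver rewinds through reset_sgl and pulls segments through next_sge at submission time, so a mismatched total length is caught before the command ever reaches the device.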
00:14:32.405 00:14:32.405 real 0m0.347s 00:14:32.405 user 0m0.126s 00:14:32.405 sys 0m0.175s 00:14:32.663 10:06:03 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:32.663 ************************************ 00:14:32.663 END TEST nvme_e2edp 00:14:32.663 ************************************ 00:14:32.663 10:06:03 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:14:32.663 10:06:03 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:14:32.663 10:06:03 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:32.663 10:06:03 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:32.663 10:06:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:32.663 ************************************ 00:14:32.663 START TEST nvme_reserve 00:14:32.663 ************************************ 00:14:32.663 10:06:03 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:14:32.922 ===================================================== 00:14:32.922 NVMe Controller at PCI bus 0, device 16, function 0 00:14:32.922 ===================================================== 00:14:32.922 Reservations: Not Supported 00:14:32.922 ===================================================== 00:14:32.922 NVMe Controller at PCI bus 0, device 17, function 0 00:14:32.922 ===================================================== 00:14:32.922 Reservations: Not Supported 00:14:32.922 ===================================================== 00:14:32.922 NVMe Controller at PCI bus 0, device 19, function 0 00:14:32.922 ===================================================== 00:14:32.922 Reservations: Not Supported 00:14:32.922 ===================================================== 00:14:32.922 NVMe Controller at PCI bus 0, device 18, function 0 00:14:32.922 ===================================================== 00:14:32.922 Reservations: Not Supported 00:14:32.922 Reservation test passed 00:14:32.922 ************************************ 00:14:32.922 END TEST nvme_reserve 00:14:32.922 ************************************ 00:14:32.922 00:14:32.922 real 0m0.340s 00:14:32.922 user 0m0.131s 00:14:32.922 sys 0m0.159s 00:14:32.922 10:06:03 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:32.922 10:06:03 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:14:32.922 10:06:03 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:14:32.922 10:06:03 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:32.922 10:06:03 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:32.922 10:06:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:32.922 ************************************ 00:14:32.922 START TEST nvme_err_injection 00:14:32.922 ************************************ 00:14:32.922 10:06:03 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:14:33.489 NVMe Error Injection test 00:14:33.489 Attached to 0000:00:10.0 00:14:33.489 Attached to 0000:00:11.0 00:14:33.489 Attached to 0000:00:13.0 00:14:33.489 Attached to 0000:00:12.0 00:14:33.489 0000:00:11.0: get features failed as expected 00:14:33.489 0000:00:13.0: get features failed as expected 00:14:33.489 0000:00:12.0: get features failed as expected 00:14:33.489 0000:00:10.0: get features failed as expected 00:14:33.489 
0000:00:10.0: get features successfully as expected 00:14:33.489 0000:00:11.0: get features successfully as expected 00:14:33.489 0000:00:13.0: get features successfully as expected 00:14:33.489 0000:00:12.0: get features successfully as expected 00:14:33.489 0000:00:10.0: read failed as expected 00:14:33.489 0000:00:11.0: read failed as expected 00:14:33.490 0000:00:13.0: read failed as expected 00:14:33.490 0000:00:12.0: read failed as expected 00:14:33.490 0000:00:10.0: read successfully as expected 00:14:33.490 0000:00:11.0: read successfully as expected 00:14:33.490 0000:00:13.0: read successfully as expected 00:14:33.490 0000:00:12.0: read successfully as expected 00:14:33.490 Cleaning up... 00:14:33.490 00:14:33.490 real 0m0.353s 00:14:33.490 user 0m0.136s 00:14:33.490 sys 0m0.174s 00:14:33.490 10:06:03 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:33.490 ************************************ 00:14:33.490 END TEST nvme_err_injection 00:14:33.490 ************************************ 00:14:33.490 10:06:03 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:14:33.490 10:06:04 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:14:33.490 10:06:04 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:14:33.490 10:06:04 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:33.490 10:06:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:33.490 ************************************ 00:14:33.508 START TEST nvme_overhead 00:14:33.508 ************************************ 00:14:33.508 10:06:04 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:14:34.883 Initializing NVMe Controllers 00:14:34.883 Attached to 0000:00:10.0 00:14:34.883 Attached to 0000:00:11.0 00:14:34.883 Attached to 0000:00:13.0 00:14:34.883 Attached to 0000:00:12.0 00:14:34.883 Initialization complete. Launching workers. 
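
One note on the err_injection pass just above, before the overhead numbers that follow: the "failed as expected" / "successfully as expected" pairs come from arming an error on the Get Features admin opcode, watching the command fail, then clearing the injection and retrying. A hedged sketch of that sequence; the exact arguments the test uses are not visible in this log, so the values below are illustrative, and passing NULL as the qpair is assumed to target the admin queue:

#include "spdk/nvme.h"

/* Arm a failure on the next Get Features admin command, so the first
 * attempt fails "as expected"; clearing the injection lets the retry
 * succeed "as expected". */
static int
inject_get_features_error(struct spdk_nvme_ctrlr *ctrlr)
{
        return spdk_nvme_qpair_add_cmd_error_injection(
                ctrlr, NULL,
                SPDK_NVME_OPC_GET_FEATURES,
                false,  /* do_not_submit: let it through, then fail it */
                0,      /* timeout_in_us */
                1,      /* err_count: fail exactly one command */
                SPDK_NVME_SCT_GENERIC,
                SPDK_NVME_SC_INVALID_FIELD); /* illustrative status choice */
}

static void
clear_injection(struct spdk_nvme_ctrlr *ctrlr)
{
        spdk_nvme_qpair_remove_cmd_error_injection(ctrlr, NULL,
                                                   SPDK_NVME_OPC_GET_FEATURES);
}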
00:14:34.883 submit (in ns) avg, min, max = 16363.6, 13517.3, 83604.5 00:14:34.883 complete (in ns) avg, min, max = 10889.2, 9340.0, 1141743.6 00:14:34.883 00:14:34.883 Submit histogram 00:14:34.883 ================ 00:14:34.883 Range in us Cumulative Count
00:14:34.884 [bucket-by-bucket distribution elided: starts at 0.0100% in the 13.498 - 13.556 us bucket, passes 42.2666% at 15.244 - 15.360 us and 75.5067% at 16.756 - 16.873 us, and reaches 100.0000% at 83.316 - 83.782 us] 00:14:34.884 00:14:34.884 Complete histogram 00:14:34.884 ================== 00:14:34.884 Range in us Cumulative Count
00:14:34.885 [bucket-by-bucket distribution elided: starts at 0.0499% in the 9.309 - 9.367 us bucket, passes 51.0634% at 9.891 - 9.949 us and 77.5337% at 11.695 - 11.753 us, and reaches 100.0000% at 1139.433 - 1146.880 us, the long tail behind the 1141743.6 ns complete max above] 00:14:34.885 00:14:34.885 ************************************ 00:14:34.885 END TEST nvme_overhead 00:14:34.885 ************************************ 00:14:34.885 00:14:34.885 00:14:34.885 real 0m1.336s 00:14:34.885 user 0m1.130s 00:14:34.885 sys 0m0.157s 00:14:34.885 10:06:05 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:34.885 10:06:05 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:14:34.885 10:06:05 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i
0 00:14:34.885 10:06:05 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:14:34.885 10:06:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:34.885 10:06:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:34.885 ************************************ 00:14:34.885 START TEST nvme_arbitration 00:14:34.885 ************************************ 00:14:34.885 10:06:05 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:14:39.071 Initializing NVMe Controllers 00:14:39.071 Attached to 0000:00:10.0 00:14:39.071 Attached to 0000:00:11.0 00:14:39.071 Attached to 0000:00:13.0 00:14:39.071 Attached to 0000:00:12.0 00:14:39.071 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:14:39.072 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:14:39.072 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:14:39.072 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:14:39.072 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:14:39.072 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:14:39.072 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:14:39.072 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:14:39.072 Initialization complete. Launching workers. 00:14:39.072 Starting thread on core 1 with urgent priority queue 00:14:39.072 Starting thread on core 2 with urgent priority queue 00:14:39.072 Starting thread on core 3 with urgent priority queue 00:14:39.072 Starting thread on core 0 with urgent priority queue 00:14:39.072 QEMU NVMe Ctrl (12340 ) core 0: 554.67 IO/s 180.29 secs/100000 ios 00:14:39.072 QEMU NVMe Ctrl (12342 ) core 0: 554.67 IO/s 180.29 secs/100000 ios 00:14:39.072 QEMU NVMe Ctrl (12341 ) core 1: 682.67 IO/s 146.48 secs/100000 ios 00:14:39.072 QEMU NVMe Ctrl (12342 ) core 1: 682.67 IO/s 146.48 secs/100000 ios 00:14:39.072 QEMU NVMe Ctrl (12343 ) core 2: 682.67 IO/s 146.48 secs/100000 ios 00:14:39.072 QEMU NVMe Ctrl (12342 ) core 3: 576.00 IO/s 173.61 secs/100000 ios 00:14:39.072 ======================================================== 00:14:39.072 00:14:39.072 00:14:39.072 real 0m3.598s 00:14:39.072 user 0m9.536s 00:14:39.072 sys 0m0.193s 00:14:39.072 10:06:09 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:39.072 10:06:09 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:14:39.072 ************************************ 00:14:39.072 END TEST nvme_arbitration 00:14:39.072 ************************************ 00:14:39.072 10:06:09 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:14:39.072 10:06:09 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:39.072 10:06:09 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:39.072 10:06:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:39.072 ************************************ 00:14:39.072 START TEST nvme_single_aen 00:14:39.072 ************************************ 00:14:39.072 10:06:09 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:14:39.072 Asynchronous Event Request test 00:14:39.072 Attached to 0000:00:10.0 00:14:39.072 Attached to 0000:00:11.0 00:14:39.072 Attached to 0000:00:13.0 00:14:39.072 Attached to 0000:00:12.0 00:14:39.072 Reset controller to setup AER completions for this process 
00:14:39.072 Registering asynchronous event callbacks... 00:14:39.072 Getting orig temperature thresholds of all controllers 00:14:39.072 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:39.072 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:39.072 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:39.072 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:39.072 Setting all controllers temperature threshold low to trigger AER 00:14:39.072 Waiting for all controllers temperature threshold to be set lower 00:14:39.072 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:39.072 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:14:39.072 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:39.072 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:14:39.072 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:39.072 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:14:39.072 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:39.072 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:14:39.072 Waiting for all controllers to trigger AER and reset threshold 00:14:39.072 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:39.072 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:39.072 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:39.072 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:39.072 Cleaning up... 00:14:39.072 00:14:39.072 real 0m0.355s 00:14:39.072 user 0m0.134s 00:14:39.072 sys 0m0.173s 00:14:39.072 ************************************ 00:14:39.072 END TEST nvme_single_aen 00:14:39.072 ************************************ 00:14:39.072 10:06:09 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:39.072 10:06:09 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:14:39.072 10:06:09 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:14:39.072 10:06:09 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:39.072 10:06:09 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:39.072 10:06:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:39.072 ************************************ 00:14:39.072 START TEST nvme_doorbell_aers 00:14:39.072 ************************************ 00:14:39.072 10:06:09 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:14:39.072 10:06:09 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:14:39.072 10:06:09 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:14:39.072 10:06:09 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:14:39.072 10:06:09 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:14:39.072 10:06:09 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:14:39.072 10:06:09 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:14:39.072 10:06:09 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:39.072 10:06:09 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:39.072 10:06:09 nvme.nvme_doorbell_aers -- 
common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:14:39.072 10:06:09 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:14:39.072 10:06:09 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:14:39.072 10:06:09 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:14:39.072 10:06:09 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:39.330 [2024-12-09 10:06:09.877153] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65032) is not found. Dropping the request. 00:14:49.300 Executing: test_write_invalid_db 00:14:49.300 Waiting for AER completion... 00:14:49.300 Failure: test_write_invalid_db 00:14:49.300 00:14:49.300 Executing: test_invalid_db_write_overflow_sq 00:14:49.300 Waiting for AER completion... 00:14:49.300 Failure: test_invalid_db_write_overflow_sq 00:14:49.300 00:14:49.300 Executing: test_invalid_db_write_overflow_cq 00:14:49.300 Waiting for AER completion... 00:14:49.300 Failure: test_invalid_db_write_overflow_cq 00:14:49.300 00:14:49.300 10:06:19 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:14:49.300 10:06:19 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:14:49.300 [2024-12-09 10:06:20.027024] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65032) is not found. Dropping the request. 00:14:59.274 Executing: test_write_invalid_db 00:14:59.274 Waiting for AER completion... 00:14:59.274 Failure: test_write_invalid_db 00:14:59.274 00:14:59.274 Executing: test_invalid_db_write_overflow_sq 00:14:59.274 Waiting for AER completion... 00:14:59.274 Failure: test_invalid_db_write_overflow_sq 00:14:59.274 00:14:59.274 Executing: test_invalid_db_write_overflow_cq 00:14:59.274 Waiting for AER completion... 00:14:59.274 Failure: test_invalid_db_write_overflow_cq 00:14:59.274 00:14:59.274 10:06:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:14:59.274 10:06:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:14:59.531 [2024-12-09 10:06:30.141782] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65032) is not found. Dropping the request. 00:15:09.497 Executing: test_write_invalid_db 00:15:09.497 Waiting for AER completion... 00:15:09.497 Failure: test_write_invalid_db 00:15:09.497 00:15:09.497 Executing: test_invalid_db_write_overflow_sq 00:15:09.497 Waiting for AER completion... 00:15:09.497 Failure: test_invalid_db_write_overflow_sq 00:15:09.497 00:15:09.497 Executing: test_invalid_db_write_overflow_cq 00:15:09.497 Waiting for AER completion... 
00:15:09.497 Failure: test_invalid_db_write_overflow_cq 00:15:09.497 00:15:09.497 10:06:39 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:15:09.497 10:06:39 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:15:09.497 [2024-12-09 10:06:40.289953] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65032) is not found. Dropping the request. 00:15:19.464 Executing: test_write_invalid_db 00:15:19.464 Waiting for AER completion... 00:15:19.464 Failure: test_write_invalid_db 00:15:19.464 00:15:19.464 Executing: test_invalid_db_write_overflow_sq 00:15:19.464 Waiting for AER completion... 00:15:19.464 Failure: test_invalid_db_write_overflow_sq 00:15:19.464 00:15:19.464 Executing: test_invalid_db_write_overflow_cq 00:15:19.464 Waiting for AER completion... 00:15:19.464 Failure: test_invalid_db_write_overflow_cq 00:15:19.464 00:15:19.464 ************************************ 00:15:19.464 END TEST nvme_doorbell_aers 00:15:19.464 ************************************ 00:15:19.464 00:15:19.464 real 0m40.615s 00:15:19.464 user 0m34.494s 00:15:19.464 sys 0m5.681s 00:15:19.464 10:06:50 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:19.465 10:06:50 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:15:19.465 10:06:50 nvme -- nvme/nvme.sh@97 -- # uname 00:15:19.465 10:06:50 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:15:19.465 10:06:50 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:15:19.465 10:06:50 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:15:19.465 10:06:50 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:19.465 10:06:50 nvme -- common/autotest_common.sh@10 -- # set +x 00:15:19.465 ************************************ 00:15:19.465 START TEST nvme_multi_aen 00:15:19.465 ************************************ 00:15:19.465 10:06:50 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:15:19.723 [2024-12-09 10:06:50.438287] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65032) is not found. Dropping the request. 00:15:19.723 [2024-12-09 10:06:50.439423] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65032) is not found. Dropping the request. 00:15:19.723 [2024-12-09 10:06:50.439558] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65032) is not found. Dropping the request. 00:15:19.723 [2024-12-09 10:06:50.441436] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65032) is not found. Dropping the request. 00:15:19.723 [2024-12-09 10:06:50.441702] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65032) is not found. Dropping the request. 00:15:19.723 [2024-12-09 10:06:50.441899] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65032) is not found. Dropping the request. 00:15:19.723 [2024-12-09 10:06:50.443590] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65032) is not found. 
Dropping the request. 00:15:19.723 [2024-12-09 10:06:50.443896] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65032) is not found. Dropping the request. 00:15:19.723 [2024-12-09 10:06:50.444127] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65032) is not found. Dropping the request. 00:15:19.723 [2024-12-09 10:06:50.445874] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65032) is not found. Dropping the request. 00:15:19.723 [2024-12-09 10:06:50.446152] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65032) is not found. Dropping the request. 00:15:19.723 [2024-12-09 10:06:50.446391] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65032) is not found. Dropping the request. 00:15:19.723 Child process pid: 65555 00:15:19.981 [Child] Asynchronous Event Request test 00:15:19.981 [Child] Attached to 0000:00:10.0 00:15:19.981 [Child] Attached to 0000:00:11.0 00:15:19.981 [Child] Attached to 0000:00:13.0 00:15:19.981 [Child] Attached to 0000:00:12.0 00:15:19.981 [Child] Registering asynchronous event callbacks... 00:15:19.981 [Child] Getting orig temperature thresholds of all controllers 00:15:19.981 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:15:19.981 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:15:19.981 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:15:19.981 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:15:19.981 [Child] Waiting for all controllers to trigger AER and reset threshold 00:15:19.981 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:15:19.981 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:15:19.981 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:15:19.981 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:15:19.982 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:15:19.982 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:15:19.982 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:15:19.982 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:15:19.982 [Child] Cleaning up... 00:15:20.240 Asynchronous Event Request test 00:15:20.240 Attached to 0000:00:10.0 00:15:20.240 Attached to 0000:00:11.0 00:15:20.240 Attached to 0000:00:13.0 00:15:20.240 Attached to 0000:00:12.0 00:15:20.240 Reset controller to setup AER completions for this process 00:15:20.240 Registering asynchronous event callbacks... 
00:15:20.240 Getting orig temperature thresholds of all controllers 00:15:20.240 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:15:20.240 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:15:20.240 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:15:20.240 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:15:20.240 Setting all controllers temperature threshold low to trigger AER 00:15:20.240 Waiting for all controllers temperature threshold to be set lower 00:15:20.240 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:15:20.240 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:15:20.240 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:15:20.240 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:15:20.240 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:15:20.240 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:15:20.240 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:15:20.240 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:15:20.240 Waiting for all controllers to trigger AER and reset threshold 00:15:20.240 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:15:20.240 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:15:20.240 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:15:20.240 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:15:20.240 Cleaning up... 00:15:20.240 00:15:20.240 real 0m0.690s 00:15:20.240 user 0m0.250s 00:15:20.240 sys 0m0.344s 00:15:20.240 ************************************ 00:15:20.240 END TEST nvme_multi_aen 00:15:20.240 ************************************ 00:15:20.240 10:06:50 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:20.240 10:06:50 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:15:20.240 10:06:50 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:15:20.240 10:06:50 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:20.240 10:06:50 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:20.240 10:06:50 nvme -- common/autotest_common.sh@10 -- # set +x 00:15:20.240 ************************************ 00:15:20.240 START TEST nvme_startup 00:15:20.240 ************************************ 00:15:20.240 10:06:50 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:15:20.507 Initializing NVMe Controllers 00:15:20.507 Attached to 0000:00:10.0 00:15:20.507 Attached to 0000:00:11.0 00:15:20.507 Attached to 0000:00:13.0 00:15:20.507 Attached to 0000:00:12.0 00:15:20.507 Initialization complete. 00:15:20.507 Time used:270055.000 (us). 
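
The AER tests above (single_aen and multi_aen, parent and child alike) share one trigger: register an asynchronous-event callback, then set the temperature threshold feature below the drive's current 323 K reading so the controller posts a temperature AER, which is exactly what the "aer_cb for log page 2" lines report. A compressed sketch, assuming the caller then polls admin completions; the completion wiring is reduced to a flag, and the feature union layout is my reading of the spec headers:

#include "spdk/stdinc.h"
#include "spdk/nvme.h"

static void
aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
        if (!spdk_nvme_cpl_is_error(cpl)) {
                /* cdw0 carries event type/info, e.g. 0x01/0x01 = temperature */
                printf("aer_cb: cdw0 0x%x\n", cpl->cdw0);
        }
}

static void
set_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
        *(bool *)arg = true;
}

/* Lower the composite temperature threshold to `kelvin`, which should make
 * the controller post the asynchronous event seen in the log above. */
static int
arm_temp_threshold(struct spdk_nvme_ctrlr *ctrlr, uint32_t kelvin, bool *done)
{
        union spdk_nvme_feat_temperature_threshold tt = { .raw = 0 };

        tt.bits.tmpth = kelvin;
        spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
        return spdk_nvme_ctrlr_cmd_set_feature(ctrlr,
                                               SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
                                               tt.raw, 0, NULL, 0, set_cb, done);
}

/* The caller then spins spdk_nvme_ctrlr_process_admin_completions(ctrlr)
 * until *done is set and aer_cb has fired. */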
00:15:20.507 00:15:20.507 real 0m0.391s 00:15:20.507 user 0m0.195s 00:15:20.507 sys 0m0.157s 00:15:20.507 ************************************ 00:15:20.507 END TEST nvme_startup 00:15:20.507 ************************************ 00:15:20.507 10:06:51 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:20.507 10:06:51 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:15:20.766 10:06:51 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:15:20.766 10:06:51 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:20.766 10:06:51 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:20.766 10:06:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:15:20.766 ************************************ 00:15:20.766 START TEST nvme_multi_secondary 00:15:20.766 ************************************ 00:15:20.766 10:06:51 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:15:20.766 10:06:51 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65611 00:15:20.766 10:06:51 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:15:20.766 10:06:51 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65612 00:15:20.766 10:06:51 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:15:20.766 10:06:51 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:15:24.049 Initializing NVMe Controllers 00:15:24.049 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:24.049 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:15:24.049 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:15:24.049 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:15:24.049 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:15:24.049 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:15:24.049 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:15:24.049 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:15:24.049 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:15:24.049 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:15:24.049 Initialization complete. Launching workers. 
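
nvme_multi_secondary runs three spdk_nvme_perf instances against the same four controllers at once, and the "-i 0" on every invocation is what makes that legal: processes sharing a shm_id join one DPDK shared-memory group, so secondaries attach to controllers the group already owns instead of resetting them. A sketch of the env setup such a secondary would use; the process name is invented, and the core mask mirrors the -c 0x2 run:

#include "spdk/env.h"

static int
init_shared_env(void)
{
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "perf_secondary"; /* hypothetical process name */
        opts.shm_id = 0;              /* matches the -i 0 on each perf run */
        opts.core_mask = "0x2";       /* matches -c 0x2: pin to core 1 */
        if (spdk_env_init(&opts) < 0) {
                return -1;
        }
        /* A subsequent spdk_nvme_probe() in this process now attaches
         * through the shared group rather than taking over the devices. */
        return 0;
}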
00:15:24.049 ======================================================== 00:15:24.049 Latency(us) 00:15:24.049 Device Information : IOPS MiB/s Average min max 00:15:24.049 PCIE (0000:00:10.0) NSID 1 from core 2: 2382.09 9.31 6714.26 1063.51 15220.48 00:15:24.049 PCIE (0000:00:11.0) NSID 1 from core 2: 2382.09 9.31 6716.18 1000.94 16348.74 00:15:24.049 PCIE (0000:00:13.0) NSID 1 from core 2: 2382.09 9.31 6716.12 1084.21 15691.73 00:15:24.049 PCIE (0000:00:12.0) NSID 1 from core 2: 2382.09 9.31 6717.49 1083.80 15512.31 00:15:24.049 PCIE (0000:00:12.0) NSID 2 from core 2: 2382.09 9.31 6717.34 1082.10 16455.18 00:15:24.049 PCIE (0000:00:12.0) NSID 3 from core 2: 2382.09 9.31 6717.23 1088.30 15255.40 00:15:24.049 ======================================================== 00:15:24.049 Total : 14292.51 55.83 6716.44 1000.94 16455.18 00:15:24.049 00:15:24.049 Initializing NVMe Controllers 00:15:24.049 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:24.049 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:15:24.049 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:15:24.049 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:15:24.049 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:15:24.049 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:15:24.049 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:15:24.049 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:15:24.049 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:15:24.049 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:15:24.049 Initialization complete. Launching workers. 00:15:24.049 ======================================================== 00:15:24.049 Latency(us) 00:15:24.049 Device Information : IOPS MiB/s Average min max 00:15:24.049 PCIE (0000:00:10.0) NSID 1 from core 1: 5039.03 19.68 3173.16 962.14 10228.35 00:15:24.049 PCIE (0000:00:11.0) NSID 1 from core 1: 5039.03 19.68 3174.70 996.26 10335.58 00:15:24.049 PCIE (0000:00:13.0) NSID 1 from core 1: 5039.03 19.68 3174.83 963.78 8256.62 00:15:24.049 PCIE (0000:00:12.0) NSID 1 from core 1: 5039.03 19.68 3175.01 996.22 7554.52 00:15:24.049 PCIE (0000:00:12.0) NSID 2 from core 1: 5039.03 19.68 3175.15 997.48 8275.10 00:15:24.049 PCIE (0000:00:12.0) NSID 3 from core 1: 5039.03 19.68 3175.90 1003.77 9670.16 00:15:24.049 ======================================================== 00:15:24.049 Total : 30234.17 118.10 3174.79 962.14 10335.58 00:15:24.049 00:15:24.307 10:06:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65611 00:15:26.207 Initializing NVMe Controllers 00:15:26.207 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:26.207 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:15:26.207 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:15:26.207 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:15:26.207 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:15:26.207 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:15:26.207 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:15:26.207 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:15:26.207 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:15:26.207 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:15:26.207 Initialization complete. Launching workers. 
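
Each "Associating PCIE ... with lcore N" line above pairs a namespace with a worker core, and every worker drives its own I/O queue pair, which is why three processes can hammer one controller without locking. A minimal sketch of that per-core loop, assuming default qpair options, with the submissions reduced to a comment (queue depth 16 and 4096-byte reads per the command line):

#include "spdk/stdinc.h"
#include "spdk/nvme.h"

/* One worker core's view: a private qpair on a shared controller. */
static int
worker_loop(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_ns *ns, bool *stop)
{
        struct spdk_nvme_qpair *qpair;

        (void)ns; /* reads would target this namespace */
        qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
        if (qpair == NULL) {
                return -1;
        }
        while (!*stop) {
                /* submissions go here: spdk_nvme_ns_cmd_read() with a
                 * 4096-byte buffer, keeping up to 16 commands in flight */
                spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);
        }
        spdk_nvme_ctrlr_free_io_qpair(qpair);
        return 0;
}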
00:15:26.207 ======================================================== 00:15:26.207 Latency(us) 00:15:26.207 Device Information : IOPS MiB/s Average min max 00:15:26.207 PCIE (0000:00:10.0) NSID 1 from core 0: 8458.76 33.04 1889.77 972.92 8503.85 00:15:26.207 PCIE (0000:00:11.0) NSID 1 from core 0: 8458.76 33.04 1890.89 997.82 8149.28 00:15:26.207 PCIE (0000:00:13.0) NSID 1 from core 0: 8458.76 33.04 1890.76 997.54 8470.53 00:15:26.207 PCIE (0000:00:12.0) NSID 1 from core 0: 8458.76 33.04 1890.63 988.96 8171.26 00:15:26.207 PCIE (0000:00:12.0) NSID 2 from core 0: 8458.76 33.04 1890.49 977.13 7923.49 00:15:26.207 PCIE (0000:00:12.0) NSID 3 from core 0: 8458.76 33.04 1890.33 999.17 8169.14 00:15:26.207 ======================================================== 00:15:26.207 Total : 50752.54 198.25 1890.48 972.92 8503.85 00:15:26.207 00:15:26.466 10:06:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65612 00:15:26.466 10:06:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65688 00:15:26.466 10:06:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:15:26.466 10:06:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65689 00:15:26.466 10:06:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:15:26.466 10:06:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:15:29.748 Initializing NVMe Controllers 00:15:29.748 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:29.748 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:15:29.748 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:15:29.748 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:15:29.748 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:15:29.748 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:15:29.748 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:15:29.748 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:15:29.748 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:15:29.748 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:15:29.748 Initialization complete. Launching workers. 
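[Editor's note] The average latencies are likewise consistent with Little's Law at the configured queue depth of 16 per namespace: latency ~= QD / IOPS. For the core-0 rows above:

    16 / 8458.76 IOPS ~= 0.00189 s ~= 1890 us

matching the reported ~1890.48 us average to within rounding.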
00:15:29.748 ======================================================== 00:15:29.748 Latency(us) 00:15:29.748 Device Information : IOPS MiB/s Average min max 00:15:29.748 PCIE (0000:00:10.0) NSID 1 from core 0: 5616.73 21.94 2846.64 1177.37 6392.59 00:15:29.748 PCIE (0000:00:11.0) NSID 1 from core 0: 5616.73 21.94 2848.39 1199.52 6360.34 00:15:29.748 PCIE (0000:00:13.0) NSID 1 from core 0: 5616.73 21.94 2848.28 1181.72 6274.05 00:15:29.748 PCIE (0000:00:12.0) NSID 1 from core 0: 5622.06 21.96 2845.52 1189.81 6667.93 00:15:29.748 PCIE (0000:00:12.0) NSID 2 from core 0: 5622.06 21.96 2845.44 1200.71 6475.60 00:15:29.748 PCIE (0000:00:12.0) NSID 3 from core 0: 5622.06 21.96 2845.40 1198.68 6054.53 00:15:29.748 ======================================================== 00:15:29.748 Total : 33716.35 131.70 2846.61 1177.37 6667.93 00:15:29.748 00:15:30.006 Initializing NVMe Controllers 00:15:30.006 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:30.006 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:15:30.007 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:15:30.007 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:15:30.007 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:15:30.007 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:15:30.007 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:15:30.007 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:15:30.007 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:15:30.007 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:15:30.007 Initialization complete. Launching workers. 00:15:30.007 ======================================================== 00:15:30.007 Latency(us) 00:15:30.007 Device Information : IOPS MiB/s Average min max 00:15:30.007 PCIE (0000:00:10.0) NSID 1 from core 1: 5437.37 21.24 2940.44 1065.77 6781.36 00:15:30.007 PCIE (0000:00:11.0) NSID 1 from core 1: 5437.37 21.24 2941.48 1118.93 6234.52 00:15:30.007 PCIE (0000:00:13.0) NSID 1 from core 1: 5437.37 21.24 2941.12 1133.17 6406.31 00:15:30.007 PCIE (0000:00:12.0) NSID 1 from core 1: 5437.37 21.24 2940.75 1124.28 6767.15 00:15:30.007 PCIE (0000:00:12.0) NSID 2 from core 1: 5437.37 21.24 2940.42 1098.97 6569.13 00:15:30.007 PCIE (0000:00:12.0) NSID 3 from core 1: 5437.37 21.24 2940.29 956.01 6749.86 00:15:30.007 ======================================================== 00:15:30.007 Total : 32624.20 127.44 2940.75 956.01 6781.36 00:15:30.007 00:15:32.539 Initializing NVMe Controllers 00:15:32.539 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:32.539 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:15:32.539 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:15:32.539 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:15:32.539 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:15:32.539 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:15:32.539 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:15:32.539 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:15:32.539 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:15:32.539 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:15:32.539 Initialization complete. Launching workers. 
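[Editor's note] The Total row in each table is a straight aggregate of the device rows: IOPS and MiB/s are summed, and min/max are taken across devices. For the core-0 table above:

    3 x 5616.73 + 3 x 5622.06 = 33,716.37 ~= 33,716.35 total IOPS

with the small difference attributable to rounding in the per-row figures.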
00:15:32.539 ======================================================== 00:15:32.539 Latency(us) 00:15:32.539 Device Information : IOPS MiB/s Average min max 00:15:32.539 PCIE (0000:00:10.0) NSID 1 from core 2: 3559.47 13.90 4492.00 1034.76 13128.21 00:15:32.539 PCIE (0000:00:11.0) NSID 1 from core 2: 3559.47 13.90 4494.38 1049.70 16614.17 00:15:32.539 PCIE (0000:00:13.0) NSID 1 from core 2: 3559.47 13.90 4494.55 1067.94 14009.22 00:15:32.539 PCIE (0000:00:12.0) NSID 1 from core 2: 3559.47 13.90 4492.26 1067.73 14142.70 00:15:32.539 PCIE (0000:00:12.0) NSID 2 from core 2: 3559.47 13.90 4490.67 1068.50 14192.03 00:15:32.539 PCIE (0000:00:12.0) NSID 3 from core 2: 3559.47 13.90 4490.61 1067.09 14120.97 00:15:32.539 ======================================================== 00:15:32.539 Total : 21356.80 83.42 4492.41 1034.76 16614.17 00:15:32.539 00:15:32.539 ************************************ 00:15:32.539 END TEST nvme_multi_secondary 00:15:32.539 ************************************ 00:15:32.539 10:07:03 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65688 00:15:32.539 10:07:03 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65689 00:15:32.539 00:15:32.539 real 0m11.767s 00:15:32.539 user 0m19.393s 00:15:32.539 sys 0m1.079s 00:15:32.539 10:07:03 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:32.539 10:07:03 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:15:32.539 10:07:03 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:15:32.539 10:07:03 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:15:32.539 10:07:03 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/64599 ]] 00:15:32.539 10:07:03 nvme -- common/autotest_common.sh@1094 -- # kill 64599 00:15:32.539 10:07:03 nvme -- common/autotest_common.sh@1095 -- # wait 64599 00:15:32.539 [2024-12-09 10:07:03.146545] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65554) is not found. Dropping the request. 00:15:32.539 [2024-12-09 10:07:03.146669] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65554) is not found. Dropping the request. 00:15:32.539 [2024-12-09 10:07:03.146732] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65554) is not found. Dropping the request. 00:15:32.539 [2024-12-09 10:07:03.146764] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65554) is not found. Dropping the request. 00:15:32.539 [2024-12-09 10:07:03.150104] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65554) is not found. Dropping the request. 00:15:32.539 [2024-12-09 10:07:03.150183] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65554) is not found. Dropping the request. 00:15:32.539 [2024-12-09 10:07:03.150212] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65554) is not found. Dropping the request. 00:15:32.539 [2024-12-09 10:07:03.150241] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65554) is not found. Dropping the request. 00:15:32.539 [2024-12-09 10:07:03.153590] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65554) is not found. Dropping the request. 
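[Editor's note] After the multi-secondary runs, the harness tears down the long-running stub process (PID 64599 here). The "Dropping the request" notices above and continuing below are the expected side effect of killing the process (PID 65554) that owned the pending admin requests. The teardown sequence, roughly as traced (a sketch, not the exact autotest_common.sh source):

    if [[ -e /proc/$stubpid ]]; then   # stub still running?
        kill "$stubpid"
        wait "$stubpid"                # reap it; its queued admin requests are dropped
    fi
    rm -f /var/run/spdk_stub0          # remove the stub's marker file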
00:15:32.539 [2024-12-09 10:07:03.153685] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65554) is not found. Dropping the request. 00:15:32.539 [2024-12-09 10:07:03.153714] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65554) is not found. Dropping the request. 00:15:32.539 [2024-12-09 10:07:03.153744] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65554) is not found. Dropping the request. 00:15:32.539 [2024-12-09 10:07:03.156715] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65554) is not found. Dropping the request. 00:15:32.539 [2024-12-09 10:07:03.156785] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65554) is not found. Dropping the request. 00:15:32.539 [2024-12-09 10:07:03.156808] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65554) is not found. Dropping the request. 00:15:32.539 [2024-12-09 10:07:03.156846] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65554) is not found. Dropping the request. 00:15:32.805 [2024-12-09 10:07:03.453494] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:15:32.805 10:07:03 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:15:32.805 10:07:03 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:15:32.805 10:07:03 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:15:32.805 10:07:03 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:32.805 10:07:03 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:32.805 10:07:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:15:32.805 ************************************ 00:15:32.805 START TEST bdev_nvme_reset_stuck_adm_cmd 00:15:32.805 ************************************ 00:15:32.805 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:15:32.805 * Looking for test storage... 
00:15:32.805 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:15:32.805 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:32.805 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version 00:15:32.805 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:33.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.065 --rc genhtml_branch_coverage=1 00:15:33.065 --rc genhtml_function_coverage=1 00:15:33.065 --rc genhtml_legend=1 00:15:33.065 --rc geninfo_all_blocks=1 00:15:33.065 --rc geninfo_unexecuted_blocks=1 00:15:33.065 00:15:33.065 ' 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:33.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.065 --rc genhtml_branch_coverage=1 00:15:33.065 --rc genhtml_function_coverage=1 00:15:33.065 --rc genhtml_legend=1 00:15:33.065 --rc geninfo_all_blocks=1 00:15:33.065 --rc geninfo_unexecuted_blocks=1 00:15:33.065 00:15:33.065 ' 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:33.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.065 --rc genhtml_branch_coverage=1 00:15:33.065 --rc genhtml_function_coverage=1 00:15:33.065 --rc genhtml_legend=1 00:15:33.065 --rc geninfo_all_blocks=1 00:15:33.065 --rc geninfo_unexecuted_blocks=1 00:15:33.065 00:15:33.065 ' 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:33.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.065 --rc genhtml_branch_coverage=1 00:15:33.065 --rc genhtml_function_coverage=1 00:15:33.065 --rc genhtml_legend=1 00:15:33.065 --rc geninfo_all_blocks=1 00:15:33.065 --rc geninfo_unexecuted_blocks=1 00:15:33.065 00:15:33.065 ' 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:15:33.065 
10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65856 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65856 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 65856 ']' 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:33.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
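[Editor's note] The trace that follows exercises SPDK's admin-command error injection: a Get Features command (opcode 10, i.e. 0x0a) is held un-submitted by the injection, and a controller reset is then expected to complete the stuck command manually within the 5 s test timeout. The RPC sequence, condensed from the trace below (base64 command payload elided; a sketch, not the script verbatim):

    rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    rpc.py bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c <base64 cmd> &  # gets stuck on the injection
    get_feat_pid=$!
    sleep 2
    rpc.py bdev_nvme_reset_controller nvme0   # reset completes the stuck command manually
    wait "$get_feat_pid"                      # returns carrying the injected status
    rpc.py bdev_nvme_detach_controller nvme0

The test then base64-decodes the completion saved to the temp file and verifies that the returned status (SC=0x1, SCT=0x0) matches what was injected and that the elapsed time (diff_time=2) stayed under the timeout.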
00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:33.065 10:07:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:15:33.324 [2024-12-09 10:07:03.872112] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:15:33.324 [2024-12-09 10:07:03.872288] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65856 ] 00:15:33.324 [2024-12-09 10:07:04.081336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:33.582 [2024-12-09 10:07:04.263667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:33.582 [2024-12-09 10:07:04.263821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:33.582 [2024-12-09 10:07:04.264259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.582 [2024-12-09 10:07:04.264277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:34.520 10:07:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:34.520 10:07:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:15:34.520 10:07:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:15:34.520 10:07:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.520 10:07:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:15:34.520 nvme0n1 00:15:34.520 10:07:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.520 10:07:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:15:34.520 10:07:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_Z3QaQ.txt 00:15:34.520 10:07:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:15:34.520 10:07:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.520 10:07:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:15:34.780 true 00:15:34.780 10:07:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.780 10:07:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:15:34.780 10:07:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733738825 00:15:34.780 10:07:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65890 00:15:34.780 10:07:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:34.780 10:07:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:15:34.780 10:07:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:15:36.682 10:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:15:36.682 10:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.682 10:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:15:36.682 [2024-12-09 10:07:07.338627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:15:36.682 [2024-12-09 10:07:07.339531] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:36.682 [2024-12-09 10:07:07.339689] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:36.682 [2024-12-09 10:07:07.339807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.682 [2024-12-09 10:07:07.341757] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:15:36.682 10:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.682 10:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65890 00:15:36.682 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65890 00:15:36.682 10:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65890 00:15:36.682 10:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:15:36.682 10:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:15:36.682 10:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:36.682 10:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.682 10:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:15:36.682 10:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.682 10:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:15:36.682 10:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_Z3QaQ.txt 00:15:36.682 10:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:15:36.682 10:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:15:36.682 10:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:15:36.682 10:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:15:36.682 10:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:15:36.682 10:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:15:36.682 10:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:15:36.682 10:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:15:36.682 10:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:15:36.682 10:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:15:36.682 10:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:15:36.682 10:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:15:36.682 10:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:15:36.682 10:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:15:36.682 10:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:15:36.682 10:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:15:36.682 10:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:15:36.682 10:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:15:36.682 10:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:15:36.682 10:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_Z3QaQ.txt 00:15:36.682 10:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65856 00:15:36.682 10:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 65856 ']' 00:15:36.682 10:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 65856 00:15:36.682 10:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:15:36.682 10:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:36.682 10:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65856 00:15:36.941 10:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:36.941 10:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:36.941 10:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65856' 00:15:36.941 killing process with pid 65856 00:15:36.941 10:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 65856 00:15:36.941 10:07:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 65856 00:15:39.476 10:07:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:15:39.476 10:07:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:15:39.476 00:15:39.476 real 0m6.553s 00:15:39.476 user 0m22.679s 00:15:39.476 sys 0m0.883s 00:15:39.476 10:07:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:15:39.476 ************************************ 00:15:39.476 END TEST bdev_nvme_reset_stuck_adm_cmd 00:15:39.476 ************************************ 00:15:39.476 10:07:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:15:39.476 10:07:10 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:15:39.476 10:07:10 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:15:39.476 10:07:10 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:39.476 10:07:10 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:39.476 10:07:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:15:39.476 ************************************ 00:15:39.476 START TEST nvme_fio 00:15:39.476 ************************************ 00:15:39.476 10:07:10 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:15:39.476 10:07:10 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:15:39.476 10:07:10 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:15:39.476 10:07:10 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:15:39.476 10:07:10 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:15:39.476 10:07:10 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:15:39.476 10:07:10 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:15:39.476 10:07:10 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:39.476 10:07:10 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:15:39.476 10:07:10 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:15:39.476 10:07:10 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:15:39.476 10:07:10 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:15:39.476 10:07:10 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:15:39.476 10:07:10 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:15:39.476 10:07:10 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:39.476 10:07:10 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:15:40.043 10:07:10 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:40.043 10:07:10 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:15:40.301 10:07:10 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:15:40.301 10:07:10 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:15:40.301 10:07:10 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:15:40.301 10:07:10 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:40.301 10:07:10 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:40.301 10:07:10 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:40.301 10:07:10 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:40.301 10:07:10 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:15:40.301 10:07:10 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:40.301 10:07:10 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:40.301 10:07:10 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:40.301 10:07:10 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:15:40.301 10:07:10 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:40.301 10:07:10 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:40.301 10:07:10 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:40.301 10:07:10 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:15:40.301 10:07:10 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:40.301 10:07:10 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:15:40.558 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:40.558 fio-3.35 00:15:40.558 Starting 1 thread 00:15:43.861 00:15:43.861 test: (groupid=0, jobs=1): err= 0: pid=66044: Mon Dec 9 10:07:14 2024 00:15:43.861 read: IOPS=16.0k, BW=62.4MiB/s (65.4MB/s)(125MiB/2001msec) 00:15:43.861 slat (usec): min=4, max=174, avg= 6.57, stdev= 2.58 00:15:43.861 clat (usec): min=293, max=8059, avg=3979.71, stdev=817.94 00:15:43.861 lat (usec): min=302, max=8064, avg=3986.28, stdev=819.44 00:15:43.861 clat percentiles (usec): 00:15:43.861 | 1.00th=[ 3130], 5.00th=[ 3359], 10.00th=[ 3458], 20.00th=[ 3523], 00:15:43.861 | 30.00th=[ 3589], 40.00th=[ 3621], 50.00th=[ 3687], 60.00th=[ 3752], 00:15:43.861 | 70.00th=[ 3949], 80.00th=[ 4359], 90.00th=[ 4817], 95.00th=[ 5473], 00:15:43.861 | 99.00th=[ 7308], 99.50th=[ 7373], 99.90th=[ 7570], 99.95th=[ 7635], 00:15:43.861 | 99.99th=[ 7898] 00:15:43.861 bw ( KiB/s): min=65576, max=70824, per=100.00%, avg=67936.00, stdev=2663.54, samples=3 00:15:43.861 iops : min=16394, max=17706, avg=16984.00, stdev=665.89, samples=3 00:15:43.861 write: IOPS=16.0k, BW=62.5MiB/s (65.6MB/s)(125MiB/2001msec); 0 zone resets 00:15:43.861 slat (usec): min=4, max=670, avg= 6.68, stdev= 4.51 00:15:43.861 clat (usec): min=259, max=8139, avg=3994.55, stdev=820.11 00:15:43.861 lat (usec): min=267, max=8145, avg=4001.24, stdev=821.63 00:15:43.861 clat percentiles (usec): 00:15:43.861 | 1.00th=[ 3163], 5.00th=[ 3359], 10.00th=[ 3458], 20.00th=[ 3523], 00:15:43.861 | 30.00th=[ 3589], 40.00th=[ 3654], 50.00th=[ 3687], 60.00th=[ 3752], 00:15:43.861 | 70.00th=[ 3982], 80.00th=[ 4359], 90.00th=[ 4817], 95.00th=[ 5538], 00:15:43.861 | 99.00th=[ 7242], 99.50th=[ 7373], 99.90th=[ 7635], 99.95th=[ 7767], 00:15:43.861 | 99.99th=[ 7898] 00:15:43.861 bw ( KiB/s): min=66000, max=70336, per=100.00%, avg=67784.00, stdev=2267.73, samples=3 00:15:43.861 iops : min=16500, max=17584, avg=16946.00, stdev=566.93, samples=3 00:15:43.861 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.01% 00:15:43.861 lat (msec) : 2=0.05%, 4=70.75%, 10=29.17% 00:15:43.861 cpu : usr=98.25%, sys=0.35%, ctx=29, majf=0, minf=607 00:15:43.861 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:43.861 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:43.861 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:43.861 issued rwts: total=31953,32029,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:43.861 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:43.861 00:15:43.861 Run status group 0 (all jobs): 00:15:43.861 READ: bw=62.4MiB/s (65.4MB/s), 62.4MiB/s-62.4MiB/s (65.4MB/s-65.4MB/s), io=125MiB (131MB), run=2001-2001msec 00:15:43.861 WRITE: bw=62.5MiB/s (65.6MB/s), 62.5MiB/s-62.5MiB/s (65.6MB/s-65.6MB/s), io=125MiB (131MB), run=2001-2001msec 00:15:43.861 ----------------------------------------------------- 00:15:43.861 Suppressions used: 00:15:43.861 count bytes template 00:15:43.861 1 32 /usr/src/fio/parse.c 00:15:43.861 1 8 libtcmalloc_minimal.so 00:15:43.861 ----------------------------------------------------- 00:15:43.861 00:15:43.861 10:07:14 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:15:43.861 10:07:14 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:15:43.861 10:07:14 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:15:43.861 10:07:14 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:15:44.429 10:07:14 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:15:44.429 10:07:14 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:15:44.745 10:07:15 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:15:44.745 10:07:15 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:15:44.745 10:07:15 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:15:44.745 10:07:15 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:44.745 10:07:15 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:44.745 10:07:15 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:44.745 10:07:15 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:44.745 10:07:15 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:15:44.745 10:07:15 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:44.745 10:07:15 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:44.745 10:07:15 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:44.745 10:07:15 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:44.745 10:07:15 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:15:44.745 10:07:15 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:44.745 10:07:15 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:44.745 10:07:15 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:15:44.745 10:07:15 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:44.745 10:07:15 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:15:44.745 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:44.745 fio-3.35 00:15:44.745 Starting 1 thread 00:15:48.055 00:15:48.055 test: (groupid=0, jobs=1): err= 0: pid=66110: Mon Dec 9 10:07:18 2024 00:15:48.055 read: IOPS=16.5k, BW=64.4MiB/s (67.5MB/s)(129MiB/2001msec) 00:15:48.055 slat (usec): min=4, max=112, avg= 6.24, stdev= 1.97 00:15:48.055 clat (usec): min=297, max=9165, avg=3856.51, stdev=445.95 00:15:48.055 lat (usec): min=303, max=9278, avg=3862.75, stdev=446.59 00:15:48.055 clat percentiles (usec): 00:15:48.055 | 1.00th=[ 3228], 5.00th=[ 3392], 10.00th=[ 3458], 20.00th=[ 3556], 00:15:48.055 | 30.00th=[ 3621], 40.00th=[ 3654], 50.00th=[ 3720], 60.00th=[ 3818], 00:15:48.055 | 70.00th=[ 4015], 80.00th=[ 4228], 90.00th=[ 4424], 95.00th=[ 4555], 00:15:48.055 | 99.00th=[ 4817], 99.50th=[ 5669], 99.90th=[ 7439], 99.95th=[ 7832], 00:15:48.055 | 99.99th=[ 8979] 00:15:48.055 bw ( KiB/s): min=63840, max=71056, per=100.00%, avg=66269.33, stdev=4145.53, samples=3 00:15:48.055 iops : min=15960, max=17764, avg=16567.33, stdev=1036.38, samples=3 00:15:48.055 write: IOPS=16.5k, BW=64.5MiB/s (67.7MB/s)(129MiB/2001msec); 0 zone resets 00:15:48.055 slat (nsec): min=4734, max=66434, avg=6397.03, stdev=1926.64 00:15:48.055 clat (usec): min=487, max=9064, avg=3866.31, stdev=440.34 00:15:48.055 lat (usec): min=494, max=9076, avg=3872.71, stdev=440.95 00:15:48.055 clat percentiles (usec): 00:15:48.055 | 1.00th=[ 3228], 5.00th=[ 3425], 10.00th=[ 3490], 20.00th=[ 3556], 00:15:48.055 | 30.00th=[ 3621], 40.00th=[ 3654], 50.00th=[ 3720], 60.00th=[ 3818], 00:15:48.055 | 70.00th=[ 4015], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4555], 00:15:48.055 | 99.00th=[ 4817], 99.50th=[ 5669], 99.90th=[ 7373], 99.95th=[ 8029], 00:15:48.055 | 99.99th=[ 8848] 00:15:48.055 bw ( KiB/s): min=63784, max=70648, per=100.00%, avg=66197.33, stdev=3858.97, samples=3 00:15:48.055 iops : min=15946, max=17662, avg=16549.33, stdev=964.74, samples=3 00:15:48.055 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:15:48.055 lat (msec) : 2=0.09%, 4=69.85%, 10=30.03% 00:15:48.055 cpu : usr=98.70%, sys=0.25%, ctx=33, majf=0, minf=607 00:15:48.055 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:48.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:48.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:48.055 issued rwts: total=32996,33063,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:48.055 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:48.055 00:15:48.055 Run status group 0 (all jobs): 00:15:48.055 READ: bw=64.4MiB/s (67.5MB/s), 64.4MiB/s-64.4MiB/s (67.5MB/s-67.5MB/s), io=129MiB (135MB), run=2001-2001msec 00:15:48.055 WRITE: bw=64.5MiB/s (67.7MB/s), 64.5MiB/s-64.5MiB/s (67.7MB/s-67.7MB/s), io=129MiB (135MB), run=2001-2001msec 00:15:48.620 ----------------------------------------------------- 00:15:48.620 Suppressions used: 00:15:48.620 count bytes template 00:15:48.620 1 32 /usr/src/fio/parse.c 00:15:48.620 1 8 libtcmalloc_minimal.so 00:15:48.620 ----------------------------------------------------- 00:15:48.620 00:15:48.620 10:07:19 nvme.nvme_fio -- nvme/nvme.sh@44 -- # 
ran_fio=true 00:15:48.620 10:07:19 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:15:48.620 10:07:19 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:15:48.620 10:07:19 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:15:48.879 10:07:19 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:15:48.879 10:07:19 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:15:49.446 10:07:19 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:15:49.446 10:07:19 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:15:49.446 10:07:19 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:15:49.446 10:07:19 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:49.446 10:07:19 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:49.446 10:07:19 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:49.446 10:07:19 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:49.446 10:07:19 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:15:49.446 10:07:19 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:49.446 10:07:19 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:49.446 10:07:19 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:49.446 10:07:19 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:49.446 10:07:19 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:15:49.446 10:07:19 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:49.446 10:07:19 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:49.446 10:07:19 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:15:49.446 10:07:19 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:49.446 10:07:19 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:15:49.446 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:49.446 fio-3.35 00:15:49.446 Starting 1 thread 00:15:52.731 00:15:52.731 test: (groupid=0, jobs=1): err= 0: pid=66176: Mon Dec 9 10:07:23 2024 00:15:52.731 read: IOPS=15.6k, BW=60.8MiB/s (63.8MB/s)(122MiB/2001msec) 00:15:52.731 slat (nsec): min=4683, max=81522, avg=6569.50, stdev=1921.82 00:15:52.731 clat (usec): min=255, max=9295, avg=4087.06, stdev=467.35 00:15:52.731 lat (usec): min=262, max=9377, avg=4093.63, stdev=468.00 00:15:52.731 clat percentiles (usec): 00:15:52.732 | 1.00th=[ 2966], 5.00th=[ 3589], 10.00th=[ 3687], 20.00th=[ 3785], 00:15:52.732 | 30.00th=[ 3884], 40.00th=[ 3949], 50.00th=[ 3982], 60.00th=[ 4047], 
00:15:52.732 | 70.00th=[ 4146], 80.00th=[ 4424], 90.00th=[ 4686], 95.00th=[ 4948], 00:15:52.732 | 99.00th=[ 5211], 99.50th=[ 5473], 99.90th=[ 7570], 99.95th=[ 7963], 00:15:52.732 | 99.99th=[ 9110] 00:15:52.732 bw ( KiB/s): min=58128, max=63984, per=97.51%, avg=60725.33, stdev=2983.49, samples=3 00:15:52.732 iops : min=14532, max=15996, avg=15181.33, stdev=745.87, samples=3 00:15:52.732 write: IOPS=15.6k, BW=60.9MiB/s (63.8MB/s)(122MiB/2001msec); 0 zone resets 00:15:52.732 slat (nsec): min=4813, max=54127, avg=6686.80, stdev=2019.03 00:15:52.732 clat (usec): min=296, max=9120, avg=4098.83, stdev=466.79 00:15:52.732 lat (usec): min=302, max=9132, avg=4105.52, stdev=467.41 00:15:52.732 clat percentiles (usec): 00:15:52.732 | 1.00th=[ 2999], 5.00th=[ 3589], 10.00th=[ 3720], 20.00th=[ 3818], 00:15:52.732 | 30.00th=[ 3884], 40.00th=[ 3949], 50.00th=[ 4015], 60.00th=[ 4080], 00:15:52.732 | 70.00th=[ 4146], 80.00th=[ 4490], 90.00th=[ 4752], 95.00th=[ 4948], 00:15:52.732 | 99.00th=[ 5276], 99.50th=[ 5538], 99.90th=[ 7504], 99.95th=[ 7832], 00:15:52.732 | 99.99th=[ 8848] 00:15:52.732 bw ( KiB/s): min=58408, max=63488, per=96.86%, avg=60357.33, stdev=2738.30, samples=3 00:15:52.732 iops : min=14602, max=15872, avg=15089.33, stdev=684.57, samples=3 00:15:52.732 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:15:52.732 lat (msec) : 2=0.06%, 4=50.33%, 10=49.57% 00:15:52.732 cpu : usr=98.90%, sys=0.20%, ctx=3, majf=0, minf=608 00:15:52.732 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:52.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:52.732 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:52.732 issued rwts: total=31154,31174,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:52.732 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:52.732 00:15:52.732 Run status group 0 (all jobs): 00:15:52.732 READ: bw=60.8MiB/s (63.8MB/s), 60.8MiB/s-60.8MiB/s (63.8MB/s-63.8MB/s), io=122MiB (128MB), run=2001-2001msec 00:15:52.732 WRITE: bw=60.9MiB/s (63.8MB/s), 60.9MiB/s-60.9MiB/s (63.8MB/s-63.8MB/s), io=122MiB (128MB), run=2001-2001msec 00:15:53.299 ----------------------------------------------------- 00:15:53.299 Suppressions used: 00:15:53.299 count bytes template 00:15:53.299 1 32 /usr/src/fio/parse.c 00:15:53.299 1 8 libtcmalloc_minimal.so 00:15:53.299 ----------------------------------------------------- 00:15:53.299 00:15:53.299 10:07:23 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:15:53.299 10:07:23 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:15:53.299 10:07:23 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:15:53.299 10:07:23 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:15:53.575 10:07:24 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:15:53.575 10:07:24 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:15:54.142 10:07:24 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:15:54.142 10:07:24 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:15:54.142 10:07:24 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:15:54.142 10:07:24 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:54.142 10:07:24 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:54.142 10:07:24 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:54.142 10:07:24 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:54.142 10:07:24 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:15:54.142 10:07:24 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:54.142 10:07:24 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:54.142 10:07:24 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:15:54.142 10:07:24 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:54.142 10:07:24 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:54.142 10:07:24 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:54.142 10:07:24 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:54.142 10:07:24 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:15:54.142 10:07:24 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:54.142 10:07:24 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:15:54.142 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:54.142 fio-3.35 00:15:54.142 Starting 1 thread 00:15:58.326 00:15:58.326 test: (groupid=0, jobs=1): err= 0: pid=66244: Mon Dec 9 10:07:28 2024 00:15:58.326 read: IOPS=16.3k, BW=63.8MiB/s (66.9MB/s)(128MiB/2001msec) 00:15:58.326 slat (nsec): min=4685, max=54536, avg=6358.91, stdev=1702.79 00:15:58.326 clat (usec): min=335, max=8893, avg=3891.68, stdev=488.02 00:15:58.326 lat (usec): min=341, max=8948, avg=3898.04, stdev=488.69 00:15:58.326 clat percentiles (usec): 00:15:58.326 | 1.00th=[ 3228], 5.00th=[ 3458], 10.00th=[ 3523], 20.00th=[ 3621], 00:15:58.326 | 30.00th=[ 3654], 40.00th=[ 3687], 50.00th=[ 3752], 60.00th=[ 3785], 00:15:58.326 | 70.00th=[ 3884], 80.00th=[ 4293], 90.00th=[ 4490], 95.00th=[ 4621], 00:15:58.326 | 99.00th=[ 5669], 99.50th=[ 6652], 99.90th=[ 8160], 99.95th=[ 8291], 00:15:58.326 | 99.99th=[ 8717] 00:15:58.326 bw ( KiB/s): min=60960, max=67488, per=98.67%, avg=64482.67, stdev=3294.60, samples=3 00:15:58.326 iops : min=15240, max=16872, avg=16120.67, stdev=823.65, samples=3 00:15:58.326 write: IOPS=16.4k, BW=63.9MiB/s (67.1MB/s)(128MiB/2001msec); 0 zone resets 00:15:58.326 slat (nsec): min=4814, max=47702, avg=6517.00, stdev=1731.53 00:15:58.326 clat (usec): min=249, max=8747, avg=3905.10, stdev=492.64 00:15:58.326 lat (usec): min=255, max=8755, avg=3911.62, stdev=493.27 00:15:58.326 clat percentiles (usec): 00:15:58.326 | 1.00th=[ 3228], 5.00th=[ 3490], 10.00th=[ 3556], 20.00th=[ 3621], 00:15:58.326 | 30.00th=[ 3687], 40.00th=[ 3720], 50.00th=[ 3752], 60.00th=[ 3818], 00:15:58.327 | 70.00th=[ 3884], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 4621], 00:15:58.327 | 99.00th=[ 5604], 99.50th=[ 6783], 99.90th=[ 8225], 
99.95th=[ 8356], 00:15:58.327 | 99.99th=[ 8586] 00:15:58.327 bw ( KiB/s): min=61272, max=66680, per=98.09%, avg=64229.33, stdev=2739.37, samples=3 00:15:58.327 iops : min=15318, max=16670, avg=16057.33, stdev=684.84, samples=3 00:15:58.327 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:15:58.327 lat (msec) : 2=0.05%, 4=76.16%, 10=23.75% 00:15:58.327 cpu : usr=98.80%, sys=0.30%, ctx=23, majf=0, minf=606 00:15:58.327 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:58.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.327 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:58.327 issued rwts: total=32693,32757,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:58.327 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:58.327 00:15:58.327 Run status group 0 (all jobs): 00:15:58.327 READ: bw=63.8MiB/s (66.9MB/s), 63.8MiB/s-63.8MiB/s (66.9MB/s-66.9MB/s), io=128MiB (134MB), run=2001-2001msec 00:15:58.327 WRITE: bw=63.9MiB/s (67.1MB/s), 63.9MiB/s-63.9MiB/s (67.1MB/s-67.1MB/s), io=128MiB (134MB), run=2001-2001msec 00:15:58.585 ----------------------------------------------------- 00:15:58.585 Suppressions used: 00:15:58.585 count bytes template 00:15:58.585 1 32 /usr/src/fio/parse.c 00:15:58.585 1 8 libtcmalloc_minimal.so 00:15:58.585 ----------------------------------------------------- 00:15:58.585 00:15:58.585 10:07:29 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:15:58.585 10:07:29 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:15:58.585 00:15:58.585 real 0m19.227s 00:15:58.585 user 0m15.641s 00:15:58.585 sys 0m1.977s 00:15:58.585 10:07:29 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:58.585 10:07:29 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:15:58.585 ************************************ 00:15:58.585 END TEST nvme_fio 00:15:58.585 ************************************ 00:15:58.585 00:15:58.585 real 1m36.891s 00:15:58.585 user 3m56.832s 00:15:58.585 sys 0m15.507s 00:15:58.585 10:07:29 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:58.585 ************************************ 00:15:58.585 10:07:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:15:58.585 END TEST nvme 00:15:58.585 ************************************ 00:15:58.843 10:07:29 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:15:58.843 10:07:29 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:15:58.843 10:07:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:58.843 10:07:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:58.843 10:07:29 -- common/autotest_common.sh@10 -- # set +x 00:15:58.843 ************************************ 00:15:58.843 START TEST nvme_scc 00:15:58.843 ************************************ 00:15:58.843 10:07:29 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:15:58.843 * Looking for test storage... 
00:15:58.843 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:15:58.843 10:07:29 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:58.843 10:07:29 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version 00:15:58.843 10:07:29 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:58.843 10:07:29 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:58.843 10:07:29 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:58.843 10:07:29 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:58.843 10:07:29 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:58.843 10:07:29 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:15:58.843 10:07:29 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:15:58.843 10:07:29 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:15:58.843 10:07:29 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:15:58.843 10:07:29 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:15:58.843 10:07:29 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:15:58.843 10:07:29 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:15:58.843 10:07:29 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:58.843 10:07:29 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:15:58.843 10:07:29 nvme_scc -- scripts/common.sh@345 -- # : 1 00:15:58.843 10:07:29 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:58.843 10:07:29 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:58.843 10:07:29 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:15:58.843 10:07:29 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:15:58.843 10:07:29 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:58.843 10:07:29 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:15:58.843 10:07:29 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:15:58.843 10:07:29 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:15:58.843 10:07:29 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:15:58.843 10:07:29 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:58.843 10:07:29 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:15:58.843 10:07:29 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:15:58.843 10:07:29 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:58.843 10:07:29 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:58.843 10:07:29 nvme_scc -- scripts/common.sh@368 -- # return 0 00:15:58.843 10:07:29 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:58.843 10:07:29 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:58.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:58.843 --rc genhtml_branch_coverage=1 00:15:58.843 --rc genhtml_function_coverage=1 00:15:58.843 --rc genhtml_legend=1 00:15:58.843 --rc geninfo_all_blocks=1 00:15:58.843 --rc geninfo_unexecuted_blocks=1 00:15:58.843 00:15:58.843 ' 00:15:58.843 10:07:29 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:58.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:58.843 --rc genhtml_branch_coverage=1 00:15:58.843 --rc genhtml_function_coverage=1 00:15:58.843 --rc genhtml_legend=1 00:15:58.843 --rc geninfo_all_blocks=1 00:15:58.843 --rc geninfo_unexecuted_blocks=1 00:15:58.843 00:15:58.843 ' 00:15:58.843 10:07:29 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:15:58.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:58.843 --rc genhtml_branch_coverage=1 00:15:58.843 --rc genhtml_function_coverage=1 00:15:58.843 --rc genhtml_legend=1 00:15:58.843 --rc geninfo_all_blocks=1 00:15:58.843 --rc geninfo_unexecuted_blocks=1 00:15:58.843 00:15:58.843 ' 00:15:58.843 10:07:29 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:58.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:58.843 --rc genhtml_branch_coverage=1 00:15:58.843 --rc genhtml_function_coverage=1 00:15:58.843 --rc genhtml_legend=1 00:15:58.843 --rc geninfo_all_blocks=1 00:15:58.843 --rc geninfo_unexecuted_blocks=1 00:15:58.843 00:15:58.843 ' 00:15:58.843 10:07:29 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:15:58.843 10:07:29 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:15:58.843 10:07:29 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:15:58.843 10:07:29 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:58.843 10:07:29 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:58.843 10:07:29 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:15:58.843 10:07:29 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:58.843 10:07:29 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:58.843 10:07:29 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:58.843 10:07:29 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.844 10:07:29 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.844 10:07:29 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.844 10:07:29 nvme_scc -- paths/export.sh@5 -- # export PATH 00:15:58.844 10:07:29 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
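The lt 1.15 2 probe just above decides which coverage flags land in LCOV_OPTS/LCOV: the branch- and function-coverage rc flags are only forced on when the detected lcov is older than 2. The cmp_versions helper it calls boils down to a field-by-field numeric compare after splitting on '.', '-' and ':'; a self-contained sketch, simplified from scripts/common.sh and assuming purely numeric version fields:

    # ver_lt succeeds when $1 is strictly older than $2
    ver_lt() {
        local -a a b
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first differing field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    ver_lt 1.15 2 && echo "old lcov: enable branch/function coverage rc flags"

Here the probe reported lcov 1.15, so 1 < 2 decides at the first field and the extra rc options are exported, exactly as traced above.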
00:15:58.844 10:07:29 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:15:58.844 10:07:29 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:15:58.844 10:07:29 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:15:58.844 10:07:29 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:15:58.844 10:07:29 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:15:58.844 10:07:29 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:15:58.844 10:07:29 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:15:58.844 10:07:29 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:15:58.844 10:07:29 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:15:58.844 10:07:29 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:58.844 10:07:29 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:15:58.844 10:07:29 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:15:58.844 10:07:29 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:15:58.844 10:07:29 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:59.414 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:59.414 Waiting for block devices as requested 00:15:59.414 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:59.673 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:59.673 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:15:59.673 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:16:04.942 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:16:04.942 10:07:35 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:16:04.942 10:07:35 nvme_scc -- scripts/common.sh@18 -- # local i 00:16:04.942 10:07:35 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:16:04.942 10:07:35 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:04.942 10:07:35 nvme_scc -- scripts/common.sh@27 -- # return 0 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@18 -- # shift 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
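scan_nvme_ctrls has now started walking /sys/class/nvme/*: for each controller, nvme_get runs id-ctrl and snapshots every reported field into a bash associative array, and the long run of IFS=: / read / eval entries that follows is that loop, one field per iteration. Reduced to a hedged standalone sketch — plain `nvme` is used here instead of the pinned /usr/local/src/nvme-cli build, and functions.sh's extra quoting and edge-case handling is omitted:

    declare -A ctrl
    while IFS=: read -r reg val; do
        reg=${reg%%[[:space:]]*}   # 'vid       ' -> 'vid'
        val=${val# }               # drop the single space after ':'
        [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme0)
    echo "vid=${ctrl[vid]} mdts=${ctrl[mdts]} oncs=${ctrl[oncs]}"

Caching the whole identify page in an array is what lets the later tests probe capabilities (for example, ONCS bits for Compare or Write Zeroes support) without re-issuing an admin command per check.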
00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.942 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.943 10:07:35 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
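One concrete use of these cached fields: the mdts=7 parsed a few entries back bounds every I/O the tests may issue. MDTS is reported as a power of two in units of the controller's minimum memory page size (CAP.MPSMIN); the 4 KiB page used below is an assumption based on the usual default, not a value read from this log:

    mdts=7
    mpsmin_bytes=4096                       # assumed CAP.MPSMIN page size
    echo $(( (1 << mdts) * mpsmin_bytes ))  # 524288 -> 512 KiB max transfer

so under that assumption a single command may span at most 128 of the namespace's 4 KiB blocks.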
00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.943 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:16:04.944 10:07:35 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.944 10:07:35 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.944 10:07:35 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.944 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:16:04.945 10:07:35 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@18 -- # shift 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:16:04.945 
10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.945 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
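The ng0n1 fields being collected here combine into the namespace geometry: flbas selects the in-use LBA format in its low nibble, the matching lbafN entry (printed just below) carries lbads, the log2 of the block size, and nsze is the block count. A quick arithmetic check against the values in this trace:

    nsze=$(( 0x140000 ))    # 1310720 logical blocks
    flbas=$(( 0x4 ))
    fmt=$(( flbas & 0xf ))  # -> 4, i.e. "lbaf4 : ms:0 lbads:12 rp:0 (in use)"
    lbads=12                # block size = 2^12 = 4096 bytes
    echo $(( nsze << lbads ))   # 5368709120 bytes = exactly 5 GiB

which is consistent with a 5 GiB QEMU-backed namespace using 4 KiB blocks and no per-block metadata (ms:0).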
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:16:04.946 10:07:35 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"'
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"'
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]]
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"'
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]]
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"'
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]]
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "'
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]]
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "'
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]]
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "'
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]]
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "'
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]]
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"'
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]]
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "'
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]]
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "'
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]]
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "'
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@18 -- # shift
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:16:04.946 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]]
00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"'
00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000
00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]]
00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"'
00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000
00:16:04.947 10:07:35 nvme_scc
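The trace above is nvme/functions.sh's nvme_get helper walking `nvme id-ns` output one field at a time: set IFS to ':', read each line into reg/val, guard on a non-empty value, and eval the pair into a per-device associative array (ng0n1, nvme0n1, ...). A minimal standalone sketch of that parsing pattern follows; nvme_get_sketch and nsinfo are illustrative names, not the real helpers, and the trimming is a rough approximation of what the script does.

declare -gA nsinfo=()
nvme_get_sketch() { # $1 = name of the target associative array
  local ref=$1 reg val
  while IFS=: read -r reg val; do
    [[ -n $val ]] || continue        # skip lines with no "key : value" pair
    reg=${reg%% *}                   # trim the padding nvme-cli puts before ':'
    eval "${ref}[\$reg]=\${val# }"   # e.g. nsinfo[nsze]=0x140000
  done
}
# usage: /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 | nvme_get_sketch nsinfo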
-- nvme/functions.sh@21 -- # IFS=: 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.947 10:07:35 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:16:04.947 10:07:35 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.947 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]]
00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "'
00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]]
00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"'
00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]]
00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "'
00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]]
00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "'
00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]]
00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "'
00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:16:04.948 10:07:35 nvme_scc -- scripts/common.sh@18 -- # local i
00:16:04.948 10:07:35 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]]
00:16:04.948 10:07:35 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:16:04.948 10:07:35 nvme_scc -- scripts/common.sh@27 -- # return 0
00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
00:16:04.948 10:07:35
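At this point the trace has finished controller nvme0: its namespaces are stored in _ctrl_ns, and the ctrls/nvmes/bdfs/ordered_ctrls registries are populated before the loop moves on to nvme1 and gates it through pci_can_use. A minimal sketch of that walk, under stated assumptions: the pci_can_use stub and the sysfs "address" lookup are stand-ins for what scripts/common.sh and the real script do, not their actual implementations.

declare -A ctrls=() nvmes=() bdfs=()
declare -a ordered_ctrls=()
pci_can_use() {                       # stand-in for scripts/common.sh's allow/block check
  [[ -z ${PCI_ALLOWED-} || " $PCI_ALLOWED " == *" $1 "* ]]
}
for ctrl in /sys/class/nvme/nvme*; do
  [[ -e $ctrl ]] || continue
  pci=$(<"$ctrl/address")             # BDF such as 0000:00:10.0 (assumed source)
  pci_can_use "$pci" || continue
  ctrl_dev=${ctrl##*/}                # nvme0, nvme1, ...
  ctrls["$ctrl_dev"]=$ctrl_dev
  nvmes["$ctrl_dev"]=${ctrl_dev}_ns   # name of that controller's namespace array
  bdfs["$ctrl_dev"]=$pci
  ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
done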
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@18 -- # shift 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.948 
10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:16:04.948 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:16:04.949 
10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.949 10:07:35 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:16:04.949 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:16:04.950 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.950 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.950 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.950 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:16:04.950 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:16:04.950 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.950 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.950 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.950 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:16:04.950 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:16:04.950 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.950 10:07:35 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:16:04.950 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.950 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:16:04.950 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:16:04.950 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.950 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.950 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.950 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:16:04.950 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:16:04.950 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.950 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.950 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.950 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:16:04.950 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:16:04.950 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.950 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.950 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.950 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:16:04.950 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:16:04.950 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.950 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.950 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.950 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:16:04.950 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:16:04.950 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.950 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.950 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.950 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:16:04.950 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:16:04.950 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.950 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:04.950 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:04.950 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:16:04.950 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:16:04.950 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:04.950 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.215 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:05.216 10:07:35 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0
00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"'
00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0
00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"'
00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0
00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]]
00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"'
00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-'
00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]]
00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"'
00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=-
00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]]
00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1
00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1
00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val
00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@18 -- # shift
00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()'
00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1
00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]]
00:16:05.216 10:07:35
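The namespace loop traced above relies on bash's extglob: @(a|b) matches exactly one of the alternatives, so for ctrl=/sys/class/nvme/nvme1 the pattern expands to both ng1n* (character-device namespaces like ng1n1) and nvme1n* (block namespaces). A small sketch of just that glob, assuming extglob/nullglob are enabled here explicitly (the real script manages its own shell options):

shopt -s extglob nullglob
ctrl=/sys/class/nvme/nvme1
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
  ns_dev=${ns##*/}                  # e.g. ng1n1 or nvme1n1
  echo "would run: nvme id-ns /dev/$ns_dev"
done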
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:05.216 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:16:05.217 10:07:35 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:05.217 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@18 -- # shift 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.218 
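Wrapping those per-field dumps is the namespace loop at functions.sh@53-58: for each controller it binds a nameref to that controller's namespace map (local -n _ctrl_ns=nvme1_ns), globs /sys/class/nvme/<ctrl>/ for both the character-device node (ng1n1) and the block-device node (nvme1n1), runs nvme_get on each, and files the device under its namespace index. Both nodes land on key 1, so the block node, globbed second, is what survives in the map, matching the two @58 assignments in this pass. A reduced illustration (extglob assumed enabled, as the harness's @(...) patterns require):

  shopt -s extglob                        # the @( ) pattern needs it
  declare -A nvme1_ns=()
  declare -n _ctrl_ns=nvme1_ns            # nameref, as at @53

  ctrl=/sys/class/nvme/nvme1
  # expands to @(ng1|nvme1n)*: char node ng1n1 and block node nvme1n1
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
      [[ -e $ns ]] || continue            # an unmatched glob stays literal
      ns_dev=${ns##*/}                    # ng1n1, then nvme1n1
      _ctrl_ns[${ns_dev##*n}]=$ns_dev     # key = namespace index (1)
  done

The nameref lets the same loop body fill whichever controller's map is current; nvme1_ns itself is reachable later through nvmes[nvme1] (see @61 below).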
10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:16:05.218 
10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:16:05.218 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.219 10:07:35 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.219 10:07:35 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:16:05.219 10:07:35 
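Those id-ns fields fix the geometry of nvme1n1 (and of its character-device twin ng1n1, which reported identical values): flbas=0x7 selects LBA format 7, whose descriptor "ms:64 lbads:12 rp:0 (in use)" means 2^12 = 4096-byte data blocks with 64 bytes of metadata each, and nsze = ncap = nuse = 0x17a17a blocks. Decoding the usable size from those two numbers is plain shell arithmetic:

  nsze=0x17a17a lbads=12
  echo $(( nsze * (1 << lbads) ))   # 6343335936 bytes, ~5.9 GiB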
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:16:05.219 10:07:35 nvme_scc -- scripts/common.sh@18 -- # local i 00:16:05.219 10:07:35 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:16:05.219 10:07:35 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:05.219 10:07:35 nvme_scc -- scripts/common.sh@27 -- # return 0 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@18 -- # shift 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:16:05.219 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 
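The start of the line above (functions.sh@60-63) is the registration step run after each fully parsed controller: ctrls maps the device name to its attribute array, nvmes to its namespace map, bdfs to its PCI address (0000:00:10.0 for nvme1), and ordered_ctrls keeps the devices sorted by instance number. The scan then advances to nvme2 at 0000:00:12.0, which pci_can_use passes straight through because both filter lists are empty, as the bare '[[ =~ 0000:00:12.0 ]]' and '[[ -z '' ]]' tests show. The bookkeeping, reduced to its effect:

  declare -A ctrls=() nvmes=() bdfs=()
  declare -a ordered_ctrls=()
  declare -A nvme1_ns=([1]=nvme1n1)            # built by the loop above

  ctrl_dev=nvme1
  ctrls["$ctrl_dev"]=$ctrl_dev                 # -> attr array "nvme1"
  nvmes["$ctrl_dev"]=${ctrl_dev}_ns            # -> ns map "nvme1_ns"
  bdfs["$ctrl_dev"]=0000:00:10.0               # PCI BDF
  ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev   # slot 1, instance order

  declare -n ns_map=${nvmes[$ctrl_dev]}        # dereference by name
  echo "${ns_map[1]}"                          # nvme1n1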
'nvme2[fr]="8.0.0 "' 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.220 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:16:05.221 10:07:35 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:16:05.221 10:07:35 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@21 
00:16:05.221 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44
00:16:05.222 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0
00:16:05.222 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256
00:16:05.222 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d
00:16:05.222 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0
00:16:05.222 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0
00:16:05.222 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7
00:16:05.222 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0
00:16:05.222 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0
00:16:05.222 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0
00:16:05.222 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0
00:16:05.222 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0
00:16:05.222 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3
00:16:05.222 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1
00:16:05.222 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0
00:16:05.222 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0
00:16:05.222 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0
00:16:05.222 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342
00:16:05.222 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0
00:16:05.222 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0
00:16:05.222 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0
00:16:05.222 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0
00:16:05.222 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0
00:16:05.222 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0
00:16:05.222 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:16:05.222 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-'
00:16:05.222 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=-
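The assignments above come from the nvme_get pattern visible in this trace: nvme-cli prints one "field : value" pair per line, the loop splits each line on the first ':' (IFS=:), squeezes the padding out of the key (which is how "ps    0" becomes ps0), and stores the pair into a dynamically named global associative array via eval. A minimal sketch of that pattern, run against canned id-ctrl-style input rather than a live device; this is a reconstruction of the mechanism seen here, not the verbatim nvme/functions.sh source:

    #!/usr/bin/env bash
    # Sketch of the nvme_get parsing loop (assumption: details reconstructed
    # from this trace, not copied from nvme/functions.sh).
    nvme_get() {
        local ref=$1 reg val
        declare -gA "$ref"      # create the global associative array
        eval "$ref=()"          # reset it (the trace uses local -gA 'name=()')
        while IFS=: read -r reg val; do
            reg=${reg// /}      # "ps    0" -> ps0, "cqes   " -> cqes
            val=${val# }        # drop the single space after the colon
            # Assignment context, so values with spaces/colons stay intact:
            [[ -n $reg && -n $val ]] && eval "${ref}[\$reg]=\$val"
        done
    }
    # Canned input standing in for /usr/local/src/nvme-cli/nvme id-ctrl:
    nvme_get nvme2 <<'EOF'
    cqes      : 0x44
    nn        : 256
    subnqn    : nqn.2019-08.org.qemu:12342
    ps    0   : mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0
    EOF
    declare -p nvme2   # prints the same key=value pairs the trace records

Note that subnqn and ps0 carry embedded colons and spaces; "read -r reg val" leaves everything after the first colon in val, and the eval assignment does no word splitting, which is why the trace can store 'mp:25.00W operational ...' as a single array element.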
00:16:05.222 10:07:35 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:16:05.222 10:07:35 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:16:05.222 10:07:35 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]]
00:16:05.222 10:07:35 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1
00:16:05.222 10:07:35 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1
00:16:05.222 10:07:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()'
00:16:05.222 10:07:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1
00:16:05.222 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000
00:16:05.222 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000
00:16:05.222 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000
00:16:05.223 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14
00:16:05.223 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7
00:16:05.223 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4
00:16:05.223 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3
00:16:05.223 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f
00:16:05.223 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0
00:16:05.223 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0
00:16:05.223 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0
00:16:05.223 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0
00:16:05.223 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1
00:16:05.223 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0
00:16:05.223 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0
00:16:05.223 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0
00:16:05.223 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0
00:16:05.223 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0
00:16:05.223 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0
00:16:05.223 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0
00:16:05.223 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0
00:16:05.223 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0
00:16:05.223 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0
00:16:05.223 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0
00:16:05.223 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0
00:16:05.223 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0
00:16:05.223 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128
00:16:05.223 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128
00:16:05.223 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127
00:16:05.223 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0
00:16:05.223 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0
00:16:05.223 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0
00:16:05.223 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0
00:16:05.223 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0
00:16:05.223 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000
00:16:05.224 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000
00:16:05.224 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:16:05.224 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:16:05.224 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:16:05.224 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:16:05.224 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:16:05.224 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:16:05.224 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:16:05.224 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:16:05.224 10:07:35 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
00:16:05.224 10:07:35 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:16:05.224 10:07:35 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
00:16:05.224 10:07:35 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2
00:16:05.224 10:07:35 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2
00:16:05.224 10:07:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()'
00:16:05.224 10:07:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2
00:16:05.224 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000
00:16:05.224 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000
00:16:05.224 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000
00:16:05.224 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14
00:16:05.224 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7
00:16:05.224 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4
00:16:05.224 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3
00:16:05.224 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f
00:16:05.224 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0
00:16:05.224 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0
00:16:05.224 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0
00:16:05.224 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0
00:16:05.224 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1
00:16:05.224 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0
00:16:05.224 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0
00:16:05.225 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0
00:16:05.225 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0
00:16:05.225 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0
00:16:05.225 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0
00:16:05.225 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0
00:16:05.225 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0
00:16:05.225 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0
00:16:05.225 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0
00:16:05.225 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0
00:16:05.225 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npda]=0
00:16:05.225 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0
00:16:05.225 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128
00:16:05.225 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128
00:16:05.225 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127
00:16:05.225 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0
00:16:05.225 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0
00:16:05.225 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0
00:16:05.225 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0
00:16:05.225 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0
00:16:05.225 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000
00:16:05.225 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000
00:16:05.225 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 '
00:16:05.225 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 '
00:16:05.225 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 '
00:16:05.225 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:16:05.225 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:16:05.225 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 '
00:16:05.225 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 '
00:16:05.225 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:16:05.225 10:07:35 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
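Each namespace reports eight LBA formats, and flbas=0x4 (bits 3:0 select the active format) points at lbaf4, 'ms:0 lbads:12 rp:0 (in use)': 2^12-byte blocks with no metadata. A small decode, with the values hard-coded from this trace rather than read from a device:

    #!/usr/bin/env bash
    # Hedged sketch: decode the in-use LBA format recorded above.
    flbas=0x4                                   # from the trace
    lbaf4='ms:0 lbads:12 rp:0 (in use)'         # from the trace
    lbads=${lbaf4#*lbads:}; lbads=${lbads%% *}  # -> 12
    nsze=$((0x100000))                          # namespace size in blocks
    echo "block=$((1 << lbads)) B, capacity=$((nsze * (1 << lbads))) B"

With lbads=12 the block size is 4096 bytes, so nsze=0x100000 blocks works out to 4 GiB per namespace on this qemu target.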
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.226 10:07:35 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:16:05.226 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@18 -- # shift 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.227 10:07:35 nvme_scc -- 
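
The @16-23 trace above is the whole capture mechanism at work: nvme_get shifts off the target array name, runs the nvme-cli id-ns command against the device node, splits every "field : value" line of its output on ':' via IFS, and eval-assigns each pair into a global associative array named after the device (here nvme2n1). A minimal stand-alone sketch of that pattern, with a hypothetical helper name and assuming an nvme binary on PATH; the real loop lives in nvme/functions.sh:

  nvme_get_sketch() {
      # ref: array to fill (e.g. nvme2n1); src: id-ns or id-ctrl; dev: device node
      local ref=$1 src=$2 dev=$3 reg val
      declare -gA "$ref=()"                     # global array, as @20 does
      while IFS=: read -r reg val; do           # split each "field : value" line
          reg=${reg//[[:space:]]/}              # strip column padding ("lbaf  0" -> lbaf0)
          [[ -n $reg && -n $val ]] || continue  # the @22 [[ -n ... ]] guard
          eval "${ref}[\$reg]=\${val# }"        # @23: store the value, one space trimmed
      done < <(nvme "$src" "$dev")
  }

Called as nvme_get_sketch nvme2n1 id-ns /dev/nvme2n1, this would leave ${nvme2n1[nsze]} holding 0x100000 exactly as the assignments above record; eval (or a nameref) is needed because bash cannot otherwise index an associative array whose name is itself held in a variable.
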
nvme/functions.sh@21 -- # read -r reg val 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:05.227 10:07:35 nvme_scc -- 
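
nsfeat comes back as 0x14 for every namespace in this run, which is worth decoding once since it gates several of the fields that follow. Read against the NVMe base spec's NSFEAT bit assignments (spec names, nothing defined by this test):

  nsfeat=$((0x14))
  (( nsfeat & 1<<2 )) && echo "DAE: deallocated/unwritten block errors reported"
  (( nsfeat & 1<<4 )) && echo "OPTPERF: npwg/npwa/npdg/npda/nows below are valid"
  (( nsfeat & 1<<1 )) || echo "atomics: nawun/nawupf/nacwu defer to controller values"

With bit 1 clear, the zeros recorded for nawun, nawupf and nacwu a few lines down are expected rather than suspicious: the controller-level atomic write parameters apply instead.
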
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:16:05.227 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.228 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.228 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.228 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:16:05.228 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:16:05.228 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.228 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.228 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.228 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:16:05.228 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:16:05.228 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.228 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.228 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.228 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:16:05.228 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:16:05.228 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.228 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.228 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.228 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:16:05.228 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:16:05.228 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.228 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.228 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.228 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:16:05.228 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:16:05.228 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.228 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.228 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.228 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:16:05.228 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:16:05.228 10:07:35 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:16:05.228 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.228 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.228 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:16:05.228 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:16:05.228 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.228 10:07:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.228 10:07:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.228 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:16:05.228 10:07:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:16:05.228 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.228 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.228 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.228 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:16:05.228 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:16:05.228 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.228 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.228 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.228 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:16:05.228 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:16:05.228 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.228 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.228 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:05.228 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:16:05.228 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:16:05.228 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.228 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.228 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:05.228 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:16:05.228 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:16:05.228 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.228 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.228 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:05.228 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:16:05.228 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:16:05.228 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.228 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.228 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.228 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:16:05.228 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:16:05.228 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.228 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.228 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.228 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:16:05.228 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:16:05.228 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.228 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.228 10:07:36 nvme_scc -- 
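
The three nonzero values in this stretch, mssrl=128, mcl=128 and msrc=127, are the ones the nvme_scc (simple copy) test is actually interested in: they are the id-ns limits for the Simple Copy command, i.e. maximum single source range length, maximum copy length, and maximum source range count (the last one 0-based). Decoded as a copy budget, arithmetic only, using the array the trace just filled:

  mssrl=${nvme2n1[mssrl]} mcl=${nvme2n1[mcl]} msrc=${nvme2n1[msrc]}
  echo "ranges/copy:  $(( msrc + 1 ))"   # 0-based field -> 128 source ranges
  echo "blocks/range: $mssrl"            # 128 LBAs per source range
  echo "blocks/copy:  $mcl"              # 128 LBAs total per command

So with the 4 KiB blocks these namespaces use (see the lbaf table below), one Simple Copy here moves at most 128 x 4 KiB = 512 KiB, and a single range can already saturate that.
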
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.228 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:16:05.228 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:16:05.228 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.228 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.228 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.228 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:16:05.228 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:16:05.490 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.490 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.490 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.490 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:16:05.490 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:16:05.490 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.490 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.490 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:05.490 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:16:05.490 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:16:05.490 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.490 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.490 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:05.490 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:16:05.490 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:16:05.490 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.490 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.490 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:05.490 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:05.490 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:05.490 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.490 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.490 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:05.490 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:05.490 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:05.490 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.490 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.490 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:05.490 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:05.490 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:05.490 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.490 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.490 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:05.490 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:05.490 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:16:05.490 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.490 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.490 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:05.490 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:05.490 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:05.490 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.490 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.490 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:05.490 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@18 -- # shift 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
]] 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:16:05.491 10:07:36 nvme_scc -- 
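
Between nlbaf=7, flbas=0x4 and the lbaf0..lbaf7 table parsed just above for nvme2n1, the active geometry is fully determined: the low nibble of flbas selects LBA format 4, whose descriptor reads ms:0 lbads:12 rp:0 (in use), i.e. 4096-byte data blocks with no metadata. A quick check using the captured values (pure arithmetic, no test helpers assumed):

  fmt=$(( ${nvme2n1[flbas]} & 0xf ))     # 0x4 & 0xf -> LBA format 4
  lbads=$(sed -n 's/.*lbads:\([0-9][0-9]*\).*/\1/p' <<< "${nvme2n1[lbaf$fmt]}")
  bs=$(( 1 << lbads ))                   # 2^12 = 4096 bytes per block
  echo "$bs B/block, $(( ${nvme2n1[nsze]} * bs / 1024**3 )) GiB"   # 4 GiB

nsze=0x100000 is 1,048,576 blocks, so each of these QEMU namespaces is 4 GiB.
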
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.491 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.492 10:07:36 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.492 10:07:36 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@18 -- # shift 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:16:05.493 
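
The @54 for-loop that keeps reappearing is doing double duty: with extglob, "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* expands to both the generic character nodes (ng2n1..ng2n3) and the block nodes (nvme2n1..nvme2n3) under the controller's sysfs directory, and @58 indexes each into _ctrl_ns by bare namespace number, so the nvme2nX entry simply overwrites the ng2nX one for the same index. A stand-alone sketch of just the enumeration (sysfs path taken from the trace, loop body reduced, nullglob assumed):

  shopt -s extglob nullglob
  ctrl=/sys/class/nvme/nvme2
  declare -A _ctrl_ns=()
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
      # ${ctrl##*nvme} -> "2" and ${ctrl##*/} -> "nvme2", so the pattern
      # matches ng2* and nvme2n*; ${ns##*n} strips down to the namespace index
      [[ -e $ns ]] && _ctrl_ns[${ns##*n}]=${ns##*/}
  done
  echo "${!_ctrl_ns[@]}"   # 1 2 3, each ultimately mapped to nvme2nX

Glob results come back sorted, which is why @58 lands on ng2n3 first (near the top of this capture) and on nvme2n3 a moment later for the same slot 3.
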
10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:16:05.493 10:07:36 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:05.493 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:05.494 10:07:36 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:16:05.494 10:07:36 nvme_scc -- 
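
With nvme2's three namespaces captured, @60-61 above and @62-63 just below file the controller away in the bookkeeping maps before the loop moves on: ctrls maps the device name to its id-ctrl array, nvmes to the per-controller namespace map, bdfs to the PCI address, and ordered_ctrls keeps a list indexed by controller number. Reconstructed from the trace values (array names verbatim; nvme2_ns is the namespace map just built):

  declare -A ctrls nvmes bdfs
  declare -a ordered_ctrls
  ctrl_dev=nvme2
  ctrls["$ctrl_dev"]=nvme2                  # @60: name -> id-ctrl data
  nvmes["$ctrl_dev"]=nvme2_ns               # @61: name -> namespace map
  bdfs["$ctrl_dev"]=0000:00:12.0            # @62: name -> PCI BDF
  ordered_ctrls[${ctrl_dev/nvme/}]=nvme2    # @63: slot 2, PCI enumeration order

The pci_can_use check that follows for 0000:00:13.0 is the admission gate for the next controller; since the allow/block filters are empty in this run (note the empty left-hand side of the [[ =~ 0000:00:13.0 ]] test at common.sh@21 and the [[ -z '' ]] at @25), it returns 0 and nvme3 gets the same treatment.
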
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:16:05.494 10:07:36 nvme_scc -- scripts/common.sh@18 -- # local i 00:16:05.494 10:07:36 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:16:05.494 10:07:36 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:05.494 10:07:36 nvme_scc -- scripts/common.sh@27 -- # return 0 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@18 -- # shift 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.494 10:07:36 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:16:05.494 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:16:05.495 10:07:36 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:16:05.495 10:07:36 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.495 
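What this stretch of trace records is the core of nvme_get: each line of nvme id-ctrl output is split on ':' into a register name and a value by the repeated IFS=: / read -r reg val pairs, and each pair is stored into a bash associative array named after the controller via eval (nvme3[vid]=0x1b36 and so on above). A minimal sketch of that same pattern, using a fixed array name and a hypothetical device path instead of the script's dynamically generated eval:

    declare -A regs=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}                # normalize the key: drop padding spaces
        [[ -n $reg && -n $val ]] && regs[$reg]=${val# }
    done < <(nvme id-ctrl /dev/nvme3)           # hypothetical device path
    echo "${regs[sn]}"                          # would print the serial number, e.g. '12343'
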
10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:16:05.495 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:16:05.496 10:07:36 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.496 
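A detail worth noting in these assignments: the target array does not exist until nvme_get creates it by name with local -gA 'nvme3=()' (visible at the start of this id-ctrl pass), and later readers such as get_nvme_ctrl_feature reach it through a nameref (local -n _ctrl=nvme3). A self-contained sketch of that by-name indirection, with made-up function names and a made-up controller name so nothing collides with the trace:

    make_ctrl() {                 # $1 = array name, e.g. nvme9
        local -gA "$1=()"         # create a global associative array by name, as the trace does
        local -n _a=$1            # nameref: _a now aliases that array
        _a[oncs]=0x15d
    }
    read_reg() {                  # $1 = array name, $2 = register
        local -n _a=$1
        echo "${_a[$2]}"
    }
    make_ctrl nvme9 && read_reg nvme9 oncs    # prints 0x15d
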
10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:16:05.496 
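The sqes=0x66 and cqes=0x44 values just captured are packed nibbles: in the identify-controller layout, the low nibble is the required queue entry size and the high nibble the maximum, each expressed as a power of two. 0x66 therefore decodes to 64-byte submission queue entries and 0x44 to 16-byte completion queue entries. A quick decode in the shell:

    sqes=0x66 cqes=0x44
    printf 'SQ entry: %d..%d bytes, CQ entry: %d..%d bytes\n' \
        $(( 1 << (sqes & 0xf) )) $(( 1 << (sqes >> 4 & 0xf) )) \
        $(( 1 << (cqes & 0xf) )) $(( 1 << (cqes >> 4 & 0xf) ))
    # SQ entry: 64..64 bytes, CQ entry: 16..16 bytes
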
10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:16:05.496 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.497 10:07:36 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:16:05.497 10:07:36 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:16:05.497 10:07:36 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
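The loop running here is get_ctrls_with_feature asking each discovered controller whether it implements the Copy command: ctrl_has_scc pulls the controller's ONCS (Optional NVM Command Support) register and tests bit 8, which the NVMe base specification assigns to Copy. All four QEMU controllers report oncs=0x15d, and 0x15d has bit 8 set (0x15d & 0x100 = 0x100), so each one is echoed back as SCC-capable. The test reduced to its essentials:

    oncs=0x15d                    # value reported by the trace
    if (( oncs & (1 << 8) )); then
        echo 'controller supports the Simple Copy command'
    fi
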
00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:16:05.497 10:07:36 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:16:05.498 10:07:36 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:16:05.498 10:07:36 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:16:05.498 10:07:36 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:16:05.498 10:07:36 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:16:05.498 10:07:36 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:16:05.498 10:07:36 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:16:05.498 10:07:36 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:16:05.498 10:07:36 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:16:05.498 10:07:36 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:16:05.498 10:07:36 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:16:05.498 10:07:36 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:16:05.498 10:07:36 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:16:05.498 10:07:36 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:16:05.498 10:07:36 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:16:05.498 10:07:36 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:16:05.498 10:07:36 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:16:05.498 10:07:36 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:16:05.498 10:07:36 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:16:05.498 10:07:36 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:16:05.498 10:07:36 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:16:05.498 10:07:36 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:16:05.498 10:07:36 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:16:05.498 10:07:36 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:06.064 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:06.631 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:06.631 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:06.631 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:06.631 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:06.889 10:07:37 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:16:06.889 10:07:37 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:06.889 10:07:37 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:06.889 10:07:37 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:16:06.889 ************************************ 00:16:06.889 START TEST nvme_simple_copy 00:16:06.889 ************************************ 00:16:06.889 10:07:37 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:16:07.147 Initializing NVMe Controllers 00:16:07.147 Attaching to 0000:00:10.0 00:16:07.147 Controller supports SCC. Attached to 0000:00:10.0 00:16:07.147 Namespace ID: 1 size: 6GB 00:16:07.147 Initialization complete. 
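The banners and timing framing this test come from the run_test helper in autotest_common.sh, which wraps each test binary, prints the START/END markers, and times the run (the real/user/sys lines below are bash's time output). A stripped-down wrapper in the same spirit, not SPDK's actual implementation:

    run_test() {
        local name=$1; shift
        printf '%s\nSTART TEST %s\n%s\n' '****' "$name" '****'
        time "$@"; local rc=$?       # $? here is the test binary's exit status
        printf '%s\nEND TEST %s\n%s\n' '****' "$name" '****'
        return "$rc"
    }
    run_test nvme_simple_copy ./simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
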
00:16:07.147 00:16:07.147 Controller QEMU NVMe Ctrl (12340 ) 00:16:07.147 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:16:07.147 Namespace Block Size:4096 00:16:07.147 Writing LBAs 0 to 63 with Random Data 00:16:07.147 Copied LBAs from 0 - 63 to the Destination LBA 256 00:16:07.147 LBAs matching Written Data: 64 00:16:07.147 00:16:07.147 real 0m0.349s 00:16:07.147 user 0m0.137s 00:16:07.147 sys 0m0.110s 00:16:07.147 10:07:37 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:07.147 ************************************ 00:16:07.147 END TEST nvme_simple_copy 00:16:07.147 ************************************ 00:16:07.147 10:07:37 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:16:07.147 00:16:07.147 real 0m8.427s 00:16:07.147 user 0m1.563s 00:16:07.147 sys 0m1.833s 00:16:07.147 10:07:37 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:07.147 ************************************ 00:16:07.147 END TEST nvme_scc 00:16:07.147 ************************************ 00:16:07.147 10:07:37 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:16:07.147 10:07:37 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:16:07.147 10:07:37 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:16:07.147 10:07:37 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:16:07.147 10:07:37 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:16:07.147 10:07:37 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:16:07.147 10:07:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:07.147 10:07:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:07.147 10:07:37 -- common/autotest_common.sh@10 -- # set +x 00:16:07.147 ************************************ 00:16:07.147 START TEST nvme_fdp 00:16:07.147 ************************************ 00:16:07.147 10:07:37 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:16:07.497 * Looking for test storage... 00:16:07.497 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:16:07.497 10:07:37 nvme_fdp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:07.497 10:07:37 nvme_fdp -- common/autotest_common.sh@1711 -- # lcov --version 00:16:07.497 10:07:37 nvme_fdp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:07.497 10:07:38 nvme_fdp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:07.497 10:07:38 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:07.497 10:07:38 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:07.497 10:07:38 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:07.497 10:07:38 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:16:07.497 10:07:38 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:16:07.497 10:07:38 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:16:07.497 10:07:38 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:16:07.497 10:07:38 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:16:07.497 10:07:38 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:16:07.497 10:07:38 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:16:07.497 10:07:38 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:07.497 10:07:38 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:16:07.497 10:07:38 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:16:07.497 10:07:38 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:07.497 10:07:38 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:07.497 10:07:38 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:16:07.497 10:07:38 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:16:07.497 10:07:38 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:07.497 10:07:38 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:16:07.497 10:07:38 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:16:07.497 10:07:38 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:16:07.497 10:07:38 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:16:07.497 10:07:38 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:07.497 10:07:38 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:16:07.497 10:07:38 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:16:07.497 10:07:38 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:07.497 10:07:38 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:07.497 10:07:38 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:16:07.497 10:07:38 nvme_fdp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:07.497 10:07:38 nvme_fdp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:07.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.497 --rc genhtml_branch_coverage=1 00:16:07.497 --rc genhtml_function_coverage=1 00:16:07.497 --rc genhtml_legend=1 00:16:07.497 --rc geninfo_all_blocks=1 00:16:07.497 --rc geninfo_unexecuted_blocks=1 00:16:07.497 00:16:07.497 ' 00:16:07.497 10:07:38 nvme_fdp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:07.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.497 --rc genhtml_branch_coverage=1 00:16:07.497 --rc genhtml_function_coverage=1 00:16:07.497 --rc genhtml_legend=1 00:16:07.497 --rc geninfo_all_blocks=1 00:16:07.497 --rc geninfo_unexecuted_blocks=1 00:16:07.497 00:16:07.497 ' 00:16:07.497 10:07:38 nvme_fdp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:07.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.497 --rc genhtml_branch_coverage=1 00:16:07.497 --rc genhtml_function_coverage=1 00:16:07.497 --rc genhtml_legend=1 00:16:07.497 --rc geninfo_all_blocks=1 00:16:07.497 --rc geninfo_unexecuted_blocks=1 00:16:07.497 00:16:07.497 ' 00:16:07.497 10:07:38 nvme_fdp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:07.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.497 --rc genhtml_branch_coverage=1 00:16:07.497 --rc genhtml_function_coverage=1 00:16:07.497 --rc genhtml_legend=1 00:16:07.497 --rc geninfo_all_blocks=1 00:16:07.497 --rc geninfo_unexecuted_blocks=1 00:16:07.497 00:16:07.497 ' 00:16:07.498 10:07:38 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:16:07.498 10:07:38 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:16:07.498 10:07:38 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:16:07.498 10:07:38 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:07.498 10:07:38 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:07.498 10:07:38 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:16:07.498 10:07:38 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:07.498 10:07:38 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:07.498 10:07:38 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:07.498 10:07:38 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.498 10:07:38 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.498 10:07:38 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.498 10:07:38 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:16:07.498 10:07:38 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.498 10:07:38 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:16:07.498 10:07:38 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:16:07.498 10:07:38 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:16:07.498 10:07:38 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:16:07.498 10:07:38 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:16:07.498 10:07:38 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:16:07.498 10:07:38 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:16:07.498 10:07:38 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:16:07.498 10:07:38 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:16:07.498 10:07:38 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:07.498 10:07:38 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:07.756 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:08.014 Waiting for block devices as requested 00:16:08.014 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:08.014 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:08.273 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:16:08.273 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:16:13.540 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:16:13.540 10:07:44 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:16:13.540 10:07:44 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:16:13.540 10:07:44 nvme_fdp -- scripts/common.sh@18 -- # local i 00:16:13.540 10:07:44 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:16:13.540 10:07:44 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:13.540 10:07:44 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.540 10:07:44 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:16:13.540 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:16:13.541 10:07:44 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:16:13.541 10:07:44 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:16:13.541 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.542 10:07:44 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:16:13.542 10:07:44 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.542 
10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.542 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:16:13.543 10:07:44 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:16:13.543 10:07:44 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:16:13.543 10:07:44 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.543 10:07:44 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:16:13.543 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
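[Editor's note] The trace above and below shows the harness repeatedly doing `IFS=: read -r reg val` and then `eval 'array[reg]="val"'` for every line of `nvme id-ctrl` / `nvme id-ns` output. A minimal sketch of that pattern, reconstructed from this trace rather than copied from the verbatim nvme/functions.sh, looks like the following (assumes nvme-cli is installed; the helper name and argument order mirror the `nvme_get nvme0 id-ctrl /dev/nvme0` calls seen in the log):

    #!/usr/bin/env bash
    # Sketch of the nvme_get pattern seen in this trace: read each
    # "reg : val" line of nvme-cli's identify output, split on the
    # first ':', and store it in a caller-named associative array.
    nvme_get() {
        local ref=$1 cmd=$2 dev=$3 reg val

        declare -gA "$ref=()"   # global associative array named by caller

        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}            # keys carry padding; strip it
            val=${val#"${val%%[![:space:]]*}"}  # drop leading spaces of value
            [[ -n $reg && -n $val ]] || continue
            # Values may themselves contain ':' (e.g. power states);
            # read with two variables keeps the remainder intact in val.
            eval "${ref}[${reg}]=\$val"
        done < <(nvme "$cmd" "$dev")
    }

    # Usage: populate ctrl[] and read registers the way the trace does.
    nvme_get ctrl id-ctrl /dev/nvme0
    echo "oacs=${ctrl[oacs]} subnqn=${ctrl[subnqn]}"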
00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:16:13.544 10:07:44 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
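[Editor's note] The ng0n1 dump just completed records `flbas=0x4` and marks `lbaf4 ... (in use)`. That is consistent with the NVMe spec: the low four bits of FLBAS select the in-use entry of the LBA Format table, and each entry's LBADS field is log2 of the LBA data size. A short illustration using the values captured above (variable names are taken from the trace; the arithmetic is plain shell):

    flbas=0x4                # from the ng0n1 dump above
    fmt=$(( flbas & 0xf ))   # -> 4, matching "lbaf4 ... (in use)"
    lbads=12                 # lbaf4 reports "ms:0 lbads:12 rp:0"
    echo "LBA format $fmt: $(( 1 << lbads ))-byte blocks"   # 4096-byte blocks

So this QEMU namespace is formatted with 4096-byte logical blocks and no metadata, which matches the `nsze=0x140000` (1,310,720 blocks) figure parsed earlier.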
00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:13.544 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.545 10:07:44 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:16:13.545 10:07:44 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.545 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:13.546 10:07:44 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:16:13.546 10:07:44 nvme_fdp -- scripts/common.sh@18 -- # local i 00:16:13.546 10:07:44 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:16:13.546 10:07:44 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:13.546 10:07:44 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.546 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:16:13.547 10:07:44 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.547 10:07:44 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
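The xtrace above is SPDK's nvme_get helper filling a global associative array from nvme-cli output: nvme/functions.sh@16 runs /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1, @20 declares the array with local -gA 'nvme1=()', and the @21-23 loop splits each "reg : val" line on ':' and evals the pair into nvme1[reg]. A minimal sketch of that shape, assuming plain whitespace trimming (the real helper's trimming and its handling of binary-dump lines may differ; nvme_get_sketch and its argument order are illustrative, not SPDK's signature):

    # Sketch of the loop behind the trace: one "reg : val" line per
    # register, split on ':' and stored in a global assoc array whose
    # name is passed in as $1.
    nvme_get_sketch() {
            local ref=$1 op=$2 dev=$3 reg val
            local -gA "$ref=()"                  # e.g. nvme1=(), as in the trace
            while IFS=: read -r reg val; do
                    reg=${reg//[[:space:]]/}     # 'vid       ' -> 'vid'
                    val=${val# }                 # drop the single leading blank
                    [[ -n $val ]] || continue    # skip blank/header lines
                    eval "$ref[\$reg]=\"\$val\"" # nvme1[vid]="0x1b36", ...
            done < <(nvme "$op" "$dev")          # assumes nvme-cli is in PATH
    }
    # Usage: nvme_get_sketch nvme1 id-ctrl /dev/nvme1; echo "${nvme1[sn]}"

The eval against a parameterized array name is what lets one helper populate the per-device arrays (nvme0, nvme0n1, nvme1, ng1n1, ...) seen throughout this trace.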
00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.547 10:07:44 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.547 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
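Earlier in this pass (scripts/common.sh@18-27, just before nvme1 was claimed) pci_can_use gated BDF 0000:00:10.0: the `[[ =~ 0000:00:10.0 ]]` test ran against an empty list, the `[[ -z '' ]]` check then fell through, and the function returned 0, i.e. any device may be claimed when no list is configured. A sketch of that gating under assumed names (PCI_ALLOWED/PCI_BLOCKED are my labels; SPDK's actual list variables and matching logic may differ):

    # Sketch: a block-list entry always wins; an empty allow-list means
    # every BDF is usable. Variable names are assumptions, not SPDK's.
    pci_can_use_sketch() {
            local bdf=$1
            [[ " ${PCI_BLOCKED:-} " == *" $bdf "* ]] && return 1
            [[ -z ${PCI_ALLOWED:-} ]] && return 0
            [[ " $PCI_ALLOWED " == *" $bdf "* ]]
    }
    # pci_can_use_sketch 0000:00:10.0 && echo claimable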
00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.548 10:07:44 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.548 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.549 10:07:44 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:16:13.549 10:07:44 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.549 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
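The ng1n1 dump in progress here (and the nvme1n1 dump that follows it) comes from the namespace walk at nvme/functions.sh@54-58: an extglob over /sys/class/nvme/nvme1 that matches both the generic character nodes (ng1n*) and the block nodes (nvme1n*), runs nvme_get id-ns on each, and keys _ctrl_ns by the namespace index. A runnable sketch of just the globbing, using the same parameter expansions as the trace:

    # Sketch of the @54 loop: for ctrl=/sys/class/nvme/nvme1 the pattern
    # expands to @(ng1|nvme1n)*, so ng1n1 and nvme1n1 both match; the
    # ${ns##*n} key keeps only the namespace index ("1"), so the block
    # node visited last overwrites the char node in the map.
    shopt -s extglob nullglob
    declare -A _ctrl_ns=()
    ctrl=/sys/class/nvme/nvme1
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
            _ctrl_ns[${ns##*n}]=${ns##*/}   # 1 -> ng1n1, then 1 -> nvme1n1
    done
    declare -p _ctrl_ns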
00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:16:13.550 10:07:44 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
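The flbas and lbafN registers dumped just below tie together: per the NVMe spec, flbas bits 3:0 select the active LBA format, and each lbafN string carries lbads (log2 of the data size in bytes) and ms (per-block metadata bytes). For ng1n1, flbas=0x7 points at lbaf7 "ms:64 lbads:12 rp:0 (in use)", i.e. 4096-byte blocks with 64 bytes of metadata, which matches the "(in use)" marker in the trace. A small decode over the array this trace populated (assumes the ng1n1 assoc array from the nvme_get pass above is in scope):

    # Sketch: derive the in-use block size from the captured registers.
    fmt=$(( ${ng1n1[flbas]} & 0xf ))                     # 0x7 & 0xf -> 7
    if [[ ${ng1n1[lbaf$fmt]} =~ lbads:([0-9]+) ]]; then
            echo "block size: $((1 << BASH_REMATCH[1])) B"   # 1 << 12 = 4096
    fi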
00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:13.550 10:07:44 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:16:13.550 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:13.551 10:07:44 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:16:13.551 10:07:44 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.551 10:07:44 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:13.551 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:16:13.552 10:07:44 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
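
The trace above shows nvme/functions.sh populating the nvme1n1 associative array: nvme_get pipes `/usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1` through `IFS=: read -r reg val`, so each "field : value" line splits on the first colon and the pair is stored with eval. A minimal standalone sketch of that tokenize-and-eval pattern (parse_id_ns is a hypothetical stand-in for illustration, not the SPDK helper itself; it assumes a stock nvme-cli in PATH):

    #!/usr/bin/env bash
    # Sketch of the pattern traced above: split "field : value" output on the
    # first ':' and store the pairs in a global associative array via eval.
    parse_id_ns() {
        local ref=$1 dev=$2 reg val
        declare -gA "$ref=()"              # e.g. nvme1n1=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}       # "lbaf  7 " -> "lbaf7"
            val=${val# }                   # drop a leading pad space
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[\$reg]=\$val"     # nvme1n1[nsze]=0x17a17a, ...
        done < <(nvme id-ns "$dev")
    }

    parse_id_ns nvme1n1 /dev/nvme1n1
    echo "nsze=${nvme1n1[nsze]}"           # 0x17a17a per the log above

Note how whitespace-stripping the key is what turns nvme-cli's "lbaf  7" rows into the lbaf7 keys seen in the trace.
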
00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:16:13.552 10:07:44 nvme_fdp -- scripts/common.sh@18 -- # local i 00:16:13.552 10:07:44 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:16:13.552 10:07:44 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:13.552 10:07:44 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.552 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.818 10:07:44 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.818 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
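
Among the id-ctrl fields just captured for nvme2, ver=0x10400 packs the controller's NVMe spec revision: bits 31:16 are the major version, bits 15:8 the minor, bits 7:0 the tertiary. A quick decode of the logged value:

    # Decode the VS (version) value captured above into a spec revision.
    ver=0x10400                            # value stored in nvme2[ver]
    printf 'NVMe %d.%d.%d\n' \
        $(( ver >> 16 )) $(( (ver >> 8) & 0xff )) $(( ver & 0xff ))
    # -> NVMe 1.4.0
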
00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:16:13.819 10:07:44 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
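
The wctemp/cctemp values recorded just above (343 and 373) are kelvin, per the id-ctrl field definitions; converted, they are the familiar warning/critical composite-temperature thresholds reported by QEMU's emulated controller:

    # WCTEMP/CCTEMP are kelvin; convert the logged values to Celsius.
    wctemp=343 cctemp=373
    echo "warning threshold:  $(( wctemp - 273 )) C"   # 70 C
    echo "critical threshold: $(( cctemp - 273 )) C"   # 100 C
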
00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:16:13.819 10:07:44 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.819 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.820 10:07:44 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:16:13.820 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
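
The sqes=0x66 and cqes=0x44 values recorded for nvme2 a little earlier pack two log2 sizes per byte: the low nibble is the required queue entry size, the high nibble the maximum supported. A small decode of the logged values:

    # Unpack SQES/CQES nibbles into byte sizes (2^n).
    decode_es() {
        local name=$1 val=$2
        printf '%s: required %dB, max %dB\n' "$name" \
            $(( 1 << (val & 0xf) )) $(( 1 << (val >> 4) ))
    }
    decode_es SQE 0x66    # SQE: required 64B, max 64B
    decode_es CQE 0x44    # CQE: required 16B, max 16B
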
00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.821 
10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.821 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.822 10:07:44 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:13.822 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:16:13.823 10:07:44 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.823 
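[editor's note] The repeated trace above is the nvme_get helper in nvme/functions.sh reading `nvme id-ns` output one "field : value" line at a time and caching every field in a global associative array named after the device node (ng2n1, ng2n2, ...). A minimal sketch of that loop, inferred only from the function line numbers visible in the trace (@16-@23), not taken from the SPDK source:

    # Sketch of nvme_get, reconstructed from the xtrace above -- an
    # approximation, not the verbatim nvme/functions.sh implementation.
    nvme_get() {
        local ref=$1 reg val                       # @17: ref names the target array, e.g. ng2n2
        shift                                      # @18: remaining args form the nvme-cli call
        local -gA "$ref=()"                        # @20: declare the global associative array
        while IFS=: read -r reg val; do            # @21: split each line at the first colon
            [[ -n $val ]] || continue              # @22: skip lines with nothing to store
            reg=${reg//[[:space:]]/}               # trim the field name (assumed detail)
            eval "${ref}[$reg]=\"${val# }\""       # @23: e.g. ng2n2[nsze]="0x100000"
        done < <(/usr/local/src/nvme-cli/nvme "$@")   # @16: e.g. nvme id-ns /dev/ng2n2
    }

Every ng2n2[...] assignment that follows in the log is one pass through that loop.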
10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.823 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.824 10:07:44 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:16:13.824 
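[editor's note] Interleaved with the per-field records, @54-@58 show the loop that discovers every namespace node a controller exposes, both the ng* character devices and the nvme*n* block devices. Roughly, assuming extglob is enabled as the @( ) pattern requires, and reusing the nvme_get sketch above:

    shopt -s extglob                       # the @( ) alternation below needs extglob
    declare -A _ctrl_ns                    # namespace-number -> device-name map
    ctrl=/sys/class/nvme/nvme2             # the controller being walked in this run
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # @54: ng2* | nvme2n*
        [[ -e $ns ]] || continue           # @55: keep only nodes that really exist
        ns_dev=${ns##*/}                   # @56: e.g. ng2n3 or nvme2n1
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # @57: cache its id-ns fields
        _ctrl_ns[${ns##*n}]=$ns_dev        # @58: index by namespace number
    done

So ng2n1..ng2n3 and nvme2n1.. each get their own array, and _ctrl_ns ends up keyed by namespace number (1, 2, 3), the block-device pass overwriting the character-device entry for the same number.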
10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.824 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:16:13.825 10:07:44 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:13.825 10:07:44 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:16:13.825 10:07:44 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.825 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:13.826 
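[editor's note] Every namespace in this run reports flbas=0x4 with lbaf4 flagged "(in use)": LBA format 4, no metadata (ms:0), and a data size of 2^12 = 4096 bytes (lbads:12). The logical block size can be recovered from the fields the trace just cached; the flbas low-nibble rule is the NVMe spec's, the snippet itself is only an illustration:

    fmt=$(( ${nvme2n1[flbas]} & 0xf ))     # bits 3:0 of flbas select the format: 4
    lbads=${nvme2n1[lbaf$fmt]#*lbads:}     # "12 rp:0 (in use)" from the lbaf4 string
    lbads=${lbads%% *}                     # keep just the exponent: 12
    echo $(( 1 << lbads ))                 # 4096-byte logical blocks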
10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:16:13.826 10:07:44 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.826 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:13.827 
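[editor's note] With nsze=0x100000 blocks of 4096 bytes, each namespace here works out to 1,048,576 x 4,096 = 4,294,967,296 bytes, i.e. 4 GiB; the all-zero nguid and eui64 just mean this (emulated) controller assigns no unique identifiers. The same arithmetic from the cached fields:

    blocks=$(( ${nvme2n1[nsze]} ))         # 0x100000 -> 1048576 logical blocks
    echo $(( blocks * 4096 )) bytes        # 4294967296 bytes = 4 GiB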
10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:16:13.827 10:07:44 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:16:13.827 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:16:13.828 10:07:44 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.828 10:07:44 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:13.828 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:16:13.829 10:07:44 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.829 10:07:44 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:16:13.829 10:07:44 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:16:13.829 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.830 10:07:44 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:13.830 10:07:44 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:16:13.830 10:07:44 nvme_fdp -- scripts/common.sh@18 -- # local i 00:16:13.830 10:07:44 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:16:13.830 10:07:44 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:13.830 10:07:44 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.830 10:07:44 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.830 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
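By this point the walk has moved from nvme2's namespaces to the next controller: functions.sh@47-63 iterates /sys/class/nvme/nvme*, checks the PCI address with pci_can_use (scripts/common.sh), parses id-ctrl into an array named after the controller (here nvme3 at 0000:00:13.0), then parses each namespace the same way and records everything in the ctrls/nvmes/bdfs/ordered_ctrls maps. A sketch of that walk; the sysfs BDF lookup and the simplified namespace glob are assumptions, not the script's exact code:

declare -A ctrls nvmes bdfs
declare -a ordered_ctrls

for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    pci=$(basename "$(readlink -f "$ctrl/device")")   # assumed lookup, e.g. 0000:00:13.0
    pci_can_use "$pci" || continue                    # honors the block/allow lists
    ctrl_dev=${ctrl##*/}                              # e.g. nvme3
    nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
    for ns in "$ctrl/${ctrl_dev}n"*; do               # simplified vs. the extglob in the trace
        [[ -e $ns ]] || continue
        nvme_get "${ns##*/}" id-ns "/dev/${ns##*/}"
    done
    ctrls["$ctrl_dev"]=$ctrl_dev
    nvmes["$ctrl_dev"]=${ctrl_dev}_ns
    bdfs["$ctrl_dev"]=$pci
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
done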
00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.831 10:07:44 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.831 
10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.831 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.832 10:07:44 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.832 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:16:13.833 10:07:44 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:16:13.833 10:07:44 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:16:14.091 10:07:44 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:16:14.091 10:07:44 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:16:14.091 10:07:44 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:16:14.091 10:07:44 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:16:14.091 10:07:44 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:16:14.091 10:07:44 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:16:14.091 10:07:44 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:16:14.091 10:07:44 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:16:14.091 10:07:44 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:16:14.091 10:07:44 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:16:14.091 10:07:44 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:16:14.091 10:07:44 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:16:14.091 10:07:44 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:16:14.091 10:07:44 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:16:14.091 10:07:44 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:16:14.091 10:07:44 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:16:14.091 10:07:44 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:16:14.091 10:07:44 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:16:14.091 10:07:44 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:16:14.091 10:07:44 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:16:14.091 10:07:44 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:16:14.091 10:07:44 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:16:14.091 10:07:44 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:16:14.091 10:07:44 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:16:14.091 10:07:44 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:16:14.091 10:07:44 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:16:14.091 10:07:44 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:16:14.091 10:07:44 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:16:14.091 10:07:44 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:16:14.092 10:07:44 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:16:14.092 10:07:44 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:16:14.092 10:07:44 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:16:14.092 10:07:44 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:16:14.092 10:07:44 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:16:14.092 10:07:44 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:16:14.092 10:07:44 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:16:14.092 10:07:44 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:16:14.092 10:07:44 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:16:14.092 10:07:44 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:16:14.092 10:07:44 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:16:14.092 10:07:44 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:16:14.092 10:07:44 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:16:14.092 10:07:44 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:16:14.092 10:07:44 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:16:14.092 10:07:44 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:16:14.092 10:07:44 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:16:14.092 10:07:44 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:16:14.092 10:07:44 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:16:14.092 10:07:44 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:16:14.092 10:07:44 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:16:14.092 10:07:44 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:16:14.092 10:07:44 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:16:14.092 10:07:44 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:16:14.092 10:07:44 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:16:14.092 10:07:44 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:16:14.092 10:07:44 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:16:14.092 10:07:44 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:16:14.092 10:07:44 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:16:14.092 10:07:44 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:16:14.092 10:07:44 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:16:14.092 10:07:44 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:14.421 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:14.996 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:14.997 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:14.997 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:15.255 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:15.255 10:07:45 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:16:15.255 10:07:45 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:15.255 10:07:45 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:15.255 10:07:45 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:16:15.255 ************************************ 00:16:15.255 START TEST nvme_flexible_data_placement 00:16:15.255 ************************************ 00:16:15.255 10:07:45 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:16:15.513 Initializing NVMe Controllers 00:16:15.513 Attaching to 0000:00:13.0 00:16:15.513 Controller supports FDP Attached to 0000:00:13.0 00:16:15.513 Namespace ID: 1 Endurance Group ID: 1 00:16:15.513 Initialization complete. 
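The wall of eval 'nvme3[...]=...' trace above is one identify-parse loop: nvme/functions.sh reads "reg : val" pairs from the controller's identify output and folds them into a per-controller associative array. Below is a minimal sketch of that pattern only, not the SPDK helper itself; the controller name, the nvme-cli id-ctrl call, and the fallback sample lines are illustrative assumptions.

    #!/usr/bin/env bash
    # Sketch of the identify-parse pattern traced above. "nvme3" and the
    # fallback input are illustrative; the real helper is in functions.sh.
    declare -A nvme3

    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}                 # drop padding around the key
        val=${val#"${val%%[![:space:]]*}"}       # trim leading spaces from the value
        [[ -n $reg && -n $val ]] || continue
        # The real script goes through eval because the target array name
        # ("nvme3") is chosen dynamically per controller.
        eval "nvme3[$reg]=\"$val\""
    done < <(nvme id-ctrl /dev/nvme3 2>/dev/null || printf 'sqes : 0x66\ncqes : 0x44\n')

    printf 'sqes=%s cqes=%s\n' "${nvme3[sqes]}" "${nvme3[cqes]}"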
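get_ctrls_with_feature then keeps only the controllers whose CTRATT identify field has bit 19 (Flexible Data Placement) set. With the values reported in this run, only nvme3 (0x88010) qualifies; a standalone sketch of that check, using the CTRATT values from the trace:

    #!/usr/bin/env bash
    # CTRATT values as reported in this run: 0x88010 has bit 19 (0x80000)
    # set, 0x8000 does not, so only nvme3 is selected as the FDP controller.
    declare -A ctratt=([nvme0]=0x8000 [nvme1]=0x8000 [nvme2]=0x8000 [nvme3]=0x88010)

    for ctrl in "${!ctratt[@]}"; do
        if (( ctratt[$ctrl] & 1 << 19 )); then
            echo "$ctrl"    # prints: nvme3
        fi
    done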
00:16:15.513
00:16:15.513 ==================================
00:16:15.513 == FDP tests for Namespace: #01 ==
00:16:15.513 ==================================
00:16:15.513
00:16:15.513 Get Feature: FDP:
00:16:15.513 =================
00:16:15.513 Enabled: Yes
00:16:15.513 FDP configuration Index: 0
00:16:15.513
00:16:15.513 FDP configurations log page
00:16:15.513 ===========================
00:16:15.513 Number of FDP configurations: 1
00:16:15.513 Version: 0
00:16:15.513 Size: 112
00:16:15.513 FDP Configuration Descriptor: 0
00:16:15.513 Descriptor Size: 96
00:16:15.513 Reclaim Group Identifier format: 2
00:16:15.513 FDP Volatile Write Cache: Not Present
00:16:15.513 FDP Configuration: Valid
00:16:15.513 Vendor Specific Size: 0
00:16:15.513 Number of Reclaim Groups: 2
00:16:15.513 Number of Reclaim Unit Handles: 8
00:16:15.513 Max Placement Identifiers: 128
00:16:15.513 Number of Namespaces Supported: 256
00:16:15.513 Reclaim Unit Nominal Size: 6000000 bytes
00:16:15.513 Estimated Reclaim Unit Time Limit: Not Reported
00:16:15.513 RUH Desc #000: RUH Type: Initially Isolated
00:16:15.513 RUH Desc #001: RUH Type: Initially Isolated
00:16:15.513 RUH Desc #002: RUH Type: Initially Isolated
00:16:15.513 RUH Desc #003: RUH Type: Initially Isolated
00:16:15.513 RUH Desc #004: RUH Type: Initially Isolated
00:16:15.513 RUH Desc #005: RUH Type: Initially Isolated
00:16:15.513 RUH Desc #006: RUH Type: Initially Isolated
00:16:15.513 RUH Desc #007: RUH Type: Initially Isolated
00:16:15.513
00:16:15.513 FDP reclaim unit handle usage log page
00:16:15.513 ======================================
00:16:15.513 Number of Reclaim Unit Handles: 8
00:16:15.513 RUH Usage Desc #000: RUH Attributes: Controller Specified
00:16:15.513 RUH Usage Desc #001: RUH Attributes: Unused
00:16:15.513 RUH Usage Desc #002: RUH Attributes: Unused
00:16:15.513 RUH Usage Desc #003: RUH Attributes: Unused
00:16:15.513 RUH Usage Desc #004: RUH Attributes: Unused
00:16:15.513 RUH Usage Desc #005: RUH Attributes: Unused
00:16:15.513 RUH Usage Desc #006: RUH Attributes: Unused
00:16:15.513 RUH Usage Desc #007: RUH Attributes: Unused
00:16:15.513
00:16:15.513 FDP statistics log page
00:16:15.513 =======================
00:16:15.513 Host bytes with metadata written: 829620224
00:16:15.513 Media bytes with metadata written: 829706240
00:16:15.513 Media bytes erased: 0
00:16:15.513
00:16:15.513 FDP Reclaim unit handle status
00:16:15.513 ==============================
00:16:15.513 Number of RUHS descriptors: 2
00:16:15.513 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000048d0
00:16:15.513 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000
00:16:15.513
00:16:15.513 FDP write on placement id: 0 success
00:16:15.513
00:16:15.513 Set Feature: Enabling FDP events on Placement handle: #0 Success
00:16:15.513
00:16:15.513 IO mgmt send: RUH update for Placement ID: #0 Success
00:16:15.513
00:16:15.513 Get Feature: FDP Events for Placement handle: #0
00:16:15.513 ========================
00:16:15.513 Number of FDP Events: 6
00:16:15.513 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes
00:16:15.513 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes
00:16:15.513 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes
00:16:15.513 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes
00:16:15.513 FDP Event: #4 Type: Media Reallocated Enabled: No
00:16:15.513 FDP Event: #5 Type: Implicitly modified RUH Enabled: No
00:16:15.513
00:16:15.513 FDP events log page
00:16:15.513 ===================
00:16:15.513 Number of FDP events: 1
00:16:15.513 FDP Event #0:
00:16:15.513 Event Type: RU Not Written to Capacity
00:16:15.513 Placement Identifier: Valid
00:16:15.513 NSID: Valid
00:16:15.513 Location: Valid
00:16:15.513 Placement Identifier: 0
00:16:15.513 Event Timestamp: 8
00:16:15.513 Namespace Identifier: 1
00:16:15.513 Reclaim Group Identifier: 0
00:16:15.513 Reclaim Unit Handle Identifier: 0
00:16:15.513
00:16:15.513 FDP test passed
00:16:15.513
00:16:15.513 real 0m0.314s
00:16:15.513 user 0m0.106s
00:16:15.513 sys 0m0.106s
00:16:15.513 10:07:46 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:15.513 ************************************
00:16:15.513 10:07:46 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x
00:16:15.513 END TEST nvme_flexible_data_placement
00:16:15.513 ************************************
00:16:15.513
00:16:15.513 real 0m8.351s
00:16:15.513 user 0m1.562s
00:16:15.513 sys 0m1.779s
00:16:15.513 10:07:46 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:15.513 10:07:46 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:16:15.513 ************************************
00:16:15.513 END TEST nvme_fdp
00:16:15.513 ************************************
00:16:15.513 10:07:46 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]]
00:16:15.513 10:07:46 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:16:15.513 10:07:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:15.513 10:07:46 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:15.513 10:07:46 -- common/autotest_common.sh@10 -- # set +x
00:16:15.513 ************************************
00:16:15.513 START TEST nvme_rpc
00:16:15.513 ************************************
00:16:15.513 10:07:46 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:16:15.771 * Looking for test storage...
00:16:15.771 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:16:15.771 10:07:46 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:15.771 10:07:46 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:16:15.771 10:07:46 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:15.771 10:07:46 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:15.771 10:07:46 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:15.771 10:07:46 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:15.771 10:07:46 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:15.771 10:07:46 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:15.771 10:07:46 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:15.771 10:07:46 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:15.771 10:07:46 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:15.771 10:07:46 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:15.771 10:07:46 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:15.771 10:07:46 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:15.771 10:07:46 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:15.771 10:07:46 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:15.771 10:07:46 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:16:15.771 10:07:46 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:15.771 10:07:46 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:15.771 10:07:46 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:15.771 10:07:46 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:16:15.771 10:07:46 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:15.771 10:07:46 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:16:15.771 10:07:46 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:15.772 10:07:46 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:15.772 10:07:46 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:16:15.772 10:07:46 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:15.772 10:07:46 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:16:15.772 10:07:46 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:15.772 10:07:46 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:15.772 10:07:46 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:15.772 10:07:46 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:16:15.772 10:07:46 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:15.772 10:07:46 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:15.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.772 --rc genhtml_branch_coverage=1 00:16:15.772 --rc genhtml_function_coverage=1 00:16:15.772 --rc genhtml_legend=1 00:16:15.772 --rc geninfo_all_blocks=1 00:16:15.772 --rc geninfo_unexecuted_blocks=1 00:16:15.772 00:16:15.772 ' 00:16:15.772 10:07:46 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:15.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.772 --rc genhtml_branch_coverage=1 00:16:15.772 --rc genhtml_function_coverage=1 00:16:15.772 --rc genhtml_legend=1 00:16:15.772 --rc geninfo_all_blocks=1 00:16:15.772 --rc geninfo_unexecuted_blocks=1 00:16:15.772 00:16:15.772 ' 00:16:15.772 10:07:46 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:16:15.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.772 --rc genhtml_branch_coverage=1 00:16:15.772 --rc genhtml_function_coverage=1 00:16:15.772 --rc genhtml_legend=1 00:16:15.772 --rc geninfo_all_blocks=1 00:16:15.772 --rc geninfo_unexecuted_blocks=1 00:16:15.772 00:16:15.772 ' 00:16:15.772 10:07:46 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:15.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.772 --rc genhtml_branch_coverage=1 00:16:15.772 --rc genhtml_function_coverage=1 00:16:15.772 --rc genhtml_legend=1 00:16:15.772 --rc geninfo_all_blocks=1 00:16:15.772 --rc geninfo_unexecuted_blocks=1 00:16:15.772 00:16:15.772 ' 00:16:15.772 10:07:46 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:15.772 10:07:46 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:16:15.772 10:07:46 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:16:15.772 10:07:46 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:16:15.772 10:07:46 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:16:15.772 10:07:46 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:16:15.772 10:07:46 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:16:15.772 10:07:46 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:16:15.772 10:07:46 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:16:15.772 10:07:46 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:15.772 10:07:46 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:16:15.772 10:07:46 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:16:15.772 10:07:46 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:16:15.772 10:07:46 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:16:15.772 10:07:46 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:16:15.772 10:07:46 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67641 00:16:16.029 10:07:46 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:16:16.029 10:07:46 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:16:16.029 10:07:46 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67641 00:16:16.029 10:07:46 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 67641 ']' 00:16:16.029 10:07:46 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.029 10:07:46 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:16.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:16.029 10:07:46 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:16.029 10:07:46 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:16.029 10:07:46 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.029 [2024-12-09 10:07:46.699986] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
00:16:16.029 [2024-12-09 10:07:46.700880] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67641 ] 00:16:16.287 [2024-12-09 10:07:46.892775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:16.287 [2024-12-09 10:07:47.062257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.287 [2024-12-09 10:07:47.062257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:17.659 10:07:48 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:17.659 10:07:48 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:17.659 10:07:48 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:16:17.659 Nvme0n1 00:16:17.659 10:07:48 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:16:17.659 10:07:48 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:16:17.918 request: 00:16:17.918 { 00:16:17.918 "bdev_name": "Nvme0n1", 00:16:17.918 "filename": "non_existing_file", 00:16:17.918 "method": "bdev_nvme_apply_firmware", 00:16:17.918 "req_id": 1 00:16:17.918 } 00:16:17.918 Got JSON-RPC error response 00:16:17.918 response: 00:16:17.918 { 00:16:17.918 "code": -32603, 00:16:17.918 "message": "open file failed." 00:16:17.918 } 00:16:17.918 10:07:48 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:16:17.918 10:07:48 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:16:17.918 10:07:48 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:16:18.175 10:07:48 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:18.175 10:07:48 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67641 00:16:18.175 10:07:48 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 67641 ']' 00:16:18.175 10:07:48 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 67641 00:16:18.175 10:07:48 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:16:18.175 10:07:48 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:18.175 10:07:48 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67641 00:16:18.434 killing process with pid 67641 00:16:18.434 10:07:48 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:18.434 10:07:48 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:18.434 10:07:48 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67641' 00:16:18.434 10:07:48 nvme_rpc -- common/autotest_common.sh@973 -- # kill 67641 00:16:18.434 10:07:48 nvme_rpc -- common/autotest_common.sh@978 -- # wait 67641 00:16:20.963 ************************************ 00:16:20.963 END TEST nvme_rpc 00:16:20.963 ************************************ 00:16:20.963 00:16:20.963 real 0m5.070s 00:16:20.963 user 0m9.459s 00:16:20.963 sys 0m0.892s 00:16:20.963 10:07:51 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:20.963 10:07:51 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:20.963 10:07:51 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:16:20.963 10:07:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:16:20.963 10:07:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:20.963 10:07:51 -- common/autotest_common.sh@10 -- # set +x 00:16:20.963 ************************************ 00:16:20.963 START TEST nvme_rpc_timeouts 00:16:20.963 ************************************ 00:16:20.963 10:07:51 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:16:20.963 * Looking for test storage... 00:16:20.963 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:16:20.963 10:07:51 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:20.963 10:07:51 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:20.963 10:07:51 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version 00:16:20.963 10:07:51 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:20.963 10:07:51 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:20.963 10:07:51 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:20.963 10:07:51 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:20.963 10:07:51 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:16:20.963 10:07:51 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:16:20.963 10:07:51 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:16:20.963 10:07:51 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:16:20.963 10:07:51 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:16:20.963 10:07:51 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:16:20.963 10:07:51 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:16:20.963 10:07:51 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:20.963 10:07:51 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:16:20.963 10:07:51 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:16:20.963 10:07:51 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:20.963 10:07:51 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:20.963 10:07:51 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:16:20.963 10:07:51 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:16:20.963 10:07:51 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:20.963 10:07:51 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:16:20.963 10:07:51 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:16:20.963 10:07:51 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:16:20.963 10:07:51 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:16:20.963 10:07:51 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:20.963 10:07:51 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:16:20.963 10:07:51 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:16:20.963 10:07:51 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:20.963 10:07:51 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:20.963 10:07:51 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:16:20.963 10:07:51 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:20.963 10:07:51 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:20.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.963 --rc genhtml_branch_coverage=1 00:16:20.963 --rc genhtml_function_coverage=1 00:16:20.963 --rc genhtml_legend=1 00:16:20.963 --rc geninfo_all_blocks=1 00:16:20.963 --rc geninfo_unexecuted_blocks=1 00:16:20.963 00:16:20.963 ' 00:16:20.963 10:07:51 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:20.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.963 --rc genhtml_branch_coverage=1 00:16:20.963 --rc genhtml_function_coverage=1 00:16:20.963 --rc genhtml_legend=1 00:16:20.963 --rc geninfo_all_blocks=1 00:16:20.963 --rc geninfo_unexecuted_blocks=1 00:16:20.963 00:16:20.963 ' 00:16:20.963 10:07:51 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:20.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.963 --rc genhtml_branch_coverage=1 00:16:20.963 --rc genhtml_function_coverage=1 00:16:20.963 --rc genhtml_legend=1 00:16:20.963 --rc geninfo_all_blocks=1 00:16:20.963 --rc geninfo_unexecuted_blocks=1 00:16:20.963 00:16:20.963 ' 00:16:20.963 10:07:51 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:20.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.963 --rc genhtml_branch_coverage=1 00:16:20.963 --rc genhtml_function_coverage=1 00:16:20.963 --rc genhtml_legend=1 00:16:20.963 --rc geninfo_all_blocks=1 00:16:20.963 --rc geninfo_unexecuted_blocks=1 00:16:20.963 00:16:20.963 ' 00:16:20.963 10:07:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:20.963 10:07:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67722 00:16:20.963 10:07:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67722 00:16:20.963 10:07:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67760 00:16:20.963 10:07:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:16:20.963 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:16:20.963 10:07:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:16:20.963 10:07:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67760 00:16:20.963 10:07:51 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 67760 ']' 00:16:20.963 10:07:51 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.964 10:07:51 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:20.964 10:07:51 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.964 10:07:51 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:20.964 10:07:51 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:16:20.964 [2024-12-09 10:07:51.739168] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:16:20.964 [2024-12-09 10:07:51.739365] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67760 ] 00:16:21.222 [2024-12-09 10:07:51.925632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:21.544 [2024-12-09 10:07:52.081438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.544 [2024-12-09 10:07:52.081448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:22.480 10:07:53 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:22.480 Checking default timeout settings: 00:16:22.480 10:07:53 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:16:22.480 10:07:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:16:22.480 10:07:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:16:22.738 Making settings changes with rpc: 00:16:22.738 10:07:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:16:22.738 10:07:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:16:22.997 Check default vs. modified settings: 00:16:22.997 10:07:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:16:22.997 10:07:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:16:23.565 10:07:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:16:23.565 10:07:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:16:23.565 10:07:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67722 00:16:23.565 10:07:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:16:23.565 10:07:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:16:23.565 10:07:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:16:23.565 10:07:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67722 00:16:23.565 10:07:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:16:23.565 10:07:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:16:23.565 Setting action_on_timeout is changed as expected. 00:16:23.565 10:07:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:16:23.565 10:07:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:16:23.565 10:07:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:16:23.565 10:07:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:16:23.565 10:07:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67722 00:16:23.565 10:07:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:16:23.565 10:07:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:16:23.565 10:07:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:16:23.565 10:07:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67722 00:16:23.565 10:07:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:16:23.565 10:07:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:16:23.565 10:07:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:16:23.565 10:07:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:16:23.565 Setting timeout_us is changed as expected. 00:16:23.565 10:07:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:16:23.565 10:07:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:16:23.565 10:07:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67722 00:16:23.565 10:07:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:16:23.565 10:07:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:16:23.565 10:07:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:16:23.565 10:07:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67722 00:16:23.565 10:07:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:16:23.565 10:07:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:16:23.565 Setting timeout_admin_us is changed as expected. 00:16:23.565 10:07:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:16:23.565 10:07:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:16:23.565 10:07:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:16:23.565 10:07:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:16:23.565 10:07:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67722 /tmp/settings_modified_67722 00:16:23.565 10:07:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67760 00:16:23.565 10:07:54 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 67760 ']' 00:16:23.565 10:07:54 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 67760 00:16:23.565 10:07:54 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:16:23.565 10:07:54 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:23.565 10:07:54 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67760 00:16:23.565 killing process with pid 67760 00:16:23.565 10:07:54 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:23.565 10:07:54 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:23.565 10:07:54 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67760' 00:16:23.565 10:07:54 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 67760 00:16:23.565 10:07:54 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 67760 00:16:26.103 RPC TIMEOUT SETTING TEST PASSED. 00:16:26.103 10:07:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
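The pass just logged comes from a save-and-diff loop: dump the bdev_nvme options before and after bdev_nvme_set_options, then require each knob to differ. A condensed sketch of that check, using the same rpc.py calls and option values as this run; the temp-file names, the quoted-key grep, and the failure message are illustrative assumptions, not the test script verbatim.

    #!/usr/bin/env bash
    # Condensed default-vs-modified settings check; paths and messages
    # are illustrative.
    set -euo pipefail
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$rpc" save_config > /tmp/settings_default
    "$rpc" bdev_nvme_set_options --timeout-us=12000000 \
        --timeout-admin-us=24000000 --action-on-timeout=abort
    "$rpc" save_config > /tmp/settings_modified

    for setting in action_on_timeout timeout_us timeout_admin_us; do
        # Grep the quoted JSON key so timeout_us does not also match
        # timeout_admin_us; strip everything but alphanumerics, as the
        # traced sed does.
        before=$(grep "\"$setting\"" /tmp/settings_default | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "\"$setting\"" /tmp/settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [[ $before != "$after" ]] || { echo "Setting $setting did not change" >&2; exit 1; }
        echo "Setting $setting is changed as expected."
    done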
00:16:26.103 00:16:26.103 real 0m5.316s 00:16:26.103 user 0m10.127s 00:16:26.103 sys 0m0.898s 00:16:26.103 ************************************ 00:16:26.103 END TEST nvme_rpc_timeouts 00:16:26.103 ************************************ 00:16:26.103 10:07:56 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:26.103 10:07:56 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:16:26.103 10:07:56 -- spdk/autotest.sh@239 -- # uname -s 00:16:26.103 10:07:56 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:16:26.103 10:07:56 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:16:26.103 10:07:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:26.103 10:07:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:26.103 10:07:56 -- common/autotest_common.sh@10 -- # set +x 00:16:26.103 ************************************ 00:16:26.103 START TEST sw_hotplug 00:16:26.103 ************************************ 00:16:26.103 10:07:56 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:16:26.103 * Looking for test storage... 00:16:26.103 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:16:26.103 10:07:56 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:26.103 10:07:56 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version 00:16:26.103 10:07:56 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:26.398 10:07:56 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:26.398 10:07:56 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:26.398 10:07:56 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:26.398 10:07:56 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:26.398 10:07:56 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:16:26.398 10:07:56 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:16:26.398 10:07:56 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:16:26.398 10:07:56 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:16:26.398 10:07:56 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:16:26.398 10:07:56 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:16:26.398 10:07:56 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:16:26.398 10:07:56 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:26.398 10:07:56 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:16:26.398 10:07:56 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:16:26.398 10:07:56 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:26.398 10:07:56 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:26.398 10:07:56 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:16:26.398 10:07:56 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:16:26.398 10:07:56 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:26.398 10:07:56 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:16:26.398 10:07:56 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:16:26.398 10:07:56 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:16:26.398 10:07:56 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:16:26.398 10:07:56 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:26.398 10:07:56 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:16:26.398 10:07:56 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:16:26.398 10:07:56 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:26.398 10:07:56 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:26.398 10:07:56 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:16:26.398 10:07:56 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:26.398 10:07:56 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:26.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.398 --rc genhtml_branch_coverage=1 00:16:26.398 --rc genhtml_function_coverage=1 00:16:26.398 --rc genhtml_legend=1 00:16:26.398 --rc geninfo_all_blocks=1 00:16:26.398 --rc geninfo_unexecuted_blocks=1 00:16:26.398 00:16:26.398 ' 00:16:26.398 10:07:56 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:26.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.398 --rc genhtml_branch_coverage=1 00:16:26.398 --rc genhtml_function_coverage=1 00:16:26.398 --rc genhtml_legend=1 00:16:26.398 --rc geninfo_all_blocks=1 00:16:26.398 --rc geninfo_unexecuted_blocks=1 00:16:26.398 00:16:26.398 ' 00:16:26.398 10:07:56 sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:26.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.399 --rc genhtml_branch_coverage=1 00:16:26.399 --rc genhtml_function_coverage=1 00:16:26.399 --rc genhtml_legend=1 00:16:26.399 --rc geninfo_all_blocks=1 00:16:26.399 --rc geninfo_unexecuted_blocks=1 00:16:26.399 00:16:26.399 ' 00:16:26.399 10:07:56 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:26.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.399 --rc genhtml_branch_coverage=1 00:16:26.399 --rc genhtml_function_coverage=1 00:16:26.399 --rc genhtml_legend=1 00:16:26.399 --rc geninfo_all_blocks=1 00:16:26.399 --rc geninfo_unexecuted_blocks=1 00:16:26.399 00:16:26.399 ' 00:16:26.399 10:07:56 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:26.657 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:26.916 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:26.916 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:26.916 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:26.916 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:26.916 10:07:57 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:16:26.916 10:07:57 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:16:26.916 10:07:57 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
00:16:26.916 10:07:57 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@233 -- # local class 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@18 -- # local i 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@18 -- # local i 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@18 -- # local i 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:16:26.916 10:07:57 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@18 -- # local i 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:16:26.916 10:07:57 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:16:26.916 10:07:57 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:16:26.916 10:07:57 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:16:26.916 10:07:57 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:27.175 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:27.433 Waiting for block devices as requested 00:16:27.433 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:27.433 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:27.692 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:16:27.692 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:16:32.960 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:16:32.960 10:08:03 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:16:32.960 10:08:03 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:33.218 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:16:33.218 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:33.218 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:16:33.784 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:16:33.784 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:33.784 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:34.042 10:08:04 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:16:34.042 10:08:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:34.042 10:08:04 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:16:34.042 10:08:04 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:16:34.042 10:08:04 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68635 00:16:34.042 10:08:04 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:16:34.042 10:08:04 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:16:34.042 10:08:04 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:16:34.042 10:08:04 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:16:34.042 10:08:04 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:16:34.042 10:08:04 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:16:34.042 10:08:04 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:16:34.042 10:08:04 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:16:34.042 10:08:04 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:16:34.042 10:08:04 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:16:34.042 10:08:04 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:16:34.042 10:08:04 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:16:34.042 10:08:04 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:16:34.042 10:08:04 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:16:34.301 Initializing NVMe Controllers 00:16:34.301 Attaching to 0000:00:10.0 00:16:34.301 Attaching to 0000:00:11.0 00:16:34.301 Attached to 0000:00:10.0 00:16:34.301 Attached to 0000:00:11.0 00:16:34.301 Initialization complete. Starting I/O... 
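Before the run, nvme_in_userspace (scripts/common.sh@233-245, traced earlier) builds the device list by PCI class code: class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVMe). The lspci pipeline below is taken from the trace; the function wrapper around it is an illustrative sketch, not the verbatim common.sh source:

    iter_nvme_bdfs() {
      # -mm machine-readable, -n numeric IDs, -D include the PCI domain
      lspci -mm -n -D |
        grep -i -- -p02 |                                       # prog-if 02 (NVMe)
        awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' |   # class+subclass 0108
        tr -d '"'
    }

The four controllers it finds (0000:00:10.0 through 0000:00:13.0) are then capped by nvme_count=2 via the slice nvmes=("${nvmes[@]::nvme_count}"), and PCI_ALLOWED confines setup.sh to 10.0 and 11.0, which is why 12.0 and 13.0 are reported above as skipped denied controllers.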
00:16:34.301 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:16:34.301 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:16:34.301 00:16:35.237 QEMU NVMe Ctrl (12340 ): 1060 I/Os completed (+1060) 00:16:35.237 QEMU NVMe Ctrl (12341 ): 1150 I/Os completed (+1150) 00:16:35.237 00:16:36.612 QEMU NVMe Ctrl (12340 ): 2408 I/Os completed (+1348) 00:16:36.612 QEMU NVMe Ctrl (12341 ): 2599 I/Os completed (+1449) 00:16:36.612 00:16:37.178 QEMU NVMe Ctrl (12340 ): 3832 I/Os completed (+1424) 00:16:37.178 QEMU NVMe Ctrl (12341 ): 4187 I/Os completed (+1588) 00:16:37.178 00:16:38.554 QEMU NVMe Ctrl (12340 ): 5325 I/Os completed (+1493) 00:16:38.554 QEMU NVMe Ctrl (12341 ): 5797 I/Os completed (+1610) 00:16:38.554 00:16:39.489 QEMU NVMe Ctrl (12340 ): 6805 I/Os completed (+1480) 00:16:39.489 QEMU NVMe Ctrl (12341 ): 7534 I/Os completed (+1737) 00:16:39.489 00:16:40.056 10:08:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:40.056 10:08:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:40.056 10:08:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:40.056 [2024-12-09 10:08:10.727775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:16:40.056 Controller removed: QEMU NVMe Ctrl (12340 ) 00:16:40.056 [2024-12-09 10:08:10.729916] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:40.056 [2024-12-09 10:08:10.729998] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:40.056 [2024-12-09 10:08:10.730031] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:40.056 [2024-12-09 10:08:10.730060] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:40.056 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:16:40.056 [2024-12-09 10:08:10.733184] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:40.056 [2024-12-09 10:08:10.733387] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:40.056 [2024-12-09 10:08:10.733426] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:40.056 [2024-12-09 10:08:10.733452] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:40.056 10:08:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:40.056 10:08:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:40.056 [2024-12-09 10:08:10.756684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
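The echo 1 traced at sw_hotplug.sh@40 is the surprise-removal half of each hotplug event. The xtrace shows only the echoed value, not its redirection, so the sysfs path below is an assumption based on standard Linux PCI hot-removal rather than the script's verbatim target:

    for bdf in "${nvmes[@]}"; do
      # assumed redirect target: detaching the sysfs device node makes the
      # controller vanish underneath the running hotplug example binary
      echo 1 > "/sys/bus/pci/devices/$bdf/remove"
    done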
00:16:40.056 Controller removed: QEMU NVMe Ctrl (12341 ) 00:16:40.056 [2024-12-09 10:08:10.758842] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:40.056 [2024-12-09 10:08:10.758903] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:40.056 [2024-12-09 10:08:10.758944] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:40.056 [2024-12-09 10:08:10.758969] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:40.056 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:16:40.056 [2024-12-09 10:08:10.761734] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:40.056 [2024-12-09 10:08:10.761788] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:40.056 [2024-12-09 10:08:10.761821] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:40.056 [2024-12-09 10:08:10.761858] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:40.056 10:08:10 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:16:40.056 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:16:40.056 EAL: Scan for (pci) bus failed. 00:16:40.056 10:08:10 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:40.315 10:08:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:40.315 10:08:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:40.315 10:08:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:40.315 10:08:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:40.315 10:08:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:40.315 10:08:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:40.315 10:08:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:40.315 10:08:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:16:40.315 Attaching to 0000:00:10.0 00:16:40.315 Attached to 0000:00:10.0 00:16:40.315 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:16:40.315 00:16:40.315 10:08:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:40.315 10:08:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:40.315 10:08:11 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:40.315 Attaching to 0000:00:11.0 00:16:40.315 Attached to 0000:00:11.0 00:16:41.251 QEMU NVMe Ctrl (12340 ): 1625 I/Os completed (+1625) 00:16:41.251 QEMU NVMe Ctrl (12341 ): 1454 I/Os completed (+1454) 00:16:41.251 00:16:42.185 QEMU NVMe Ctrl (12340 ): 3294 I/Os completed (+1669) 00:16:42.185 QEMU NVMe Ctrl (12341 ): 3174 I/Os completed (+1720) 00:16:42.185 00:16:43.560 QEMU NVMe Ctrl (12340 ): 4904 I/Os completed (+1610) 00:16:43.560 QEMU NVMe Ctrl (12341 ): 4876 I/Os completed (+1702) 00:16:43.560 00:16:44.492 QEMU NVMe Ctrl (12340 ): 6448 I/Os completed (+1544) 00:16:44.492 QEMU NVMe Ctrl (12341 ): 6515 I/Os completed (+1639) 00:16:44.492 00:16:45.426 QEMU NVMe Ctrl (12340 ): 7999 I/Os completed (+1551) 00:16:45.426 QEMU NVMe Ctrl (12341 ): 8217 I/Os completed (+1702) 00:16:45.426 00:16:46.360 QEMU NVMe Ctrl (12340 ): 9691 I/Os completed (+1692) 00:16:46.360 QEMU NVMe Ctrl (12341 ): 9932 I/Os completed (+1715) 00:16:46.360 00:16:47.296 QEMU NVMe Ctrl (12340 ): 11255 I/Os completed (+1564) 00:16:47.296 QEMU 
NVMe Ctrl (12341 ): 11551 I/Os completed (+1619) 00:16:47.296 00:16:48.250 QEMU NVMe Ctrl (12340 ): 12783 I/Os completed (+1528) 00:16:48.250 QEMU NVMe Ctrl (12341 ): 13168 I/Os completed (+1617) 00:16:48.250 00:16:49.185 QEMU NVMe Ctrl (12340 ): 14327 I/Os completed (+1544) 00:16:49.185 QEMU NVMe Ctrl (12341 ): 14808 I/Os completed (+1640) 00:16:49.185 00:16:50.557 QEMU NVMe Ctrl (12340 ): 15923 I/Os completed (+1596) 00:16:50.557 QEMU NVMe Ctrl (12341 ): 16477 I/Os completed (+1669) 00:16:50.557 00:16:51.491 QEMU NVMe Ctrl (12340 ): 17347 I/Os completed (+1424) 00:16:51.491 QEMU NVMe Ctrl (12341 ): 18133 I/Os completed (+1656) 00:16:51.491 00:16:52.426 QEMU NVMe Ctrl (12340 ): 18923 I/Os completed (+1576) 00:16:52.426 QEMU NVMe Ctrl (12341 ): 19789 I/Os completed (+1656) 00:16:52.426 00:16:52.426 10:08:23 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:16:52.426 10:08:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:52.426 10:08:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:52.426 10:08:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:52.426 [2024-12-09 10:08:23.089961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:16:52.426 Controller removed: QEMU NVMe Ctrl (12340 ) 00:16:52.426 [2024-12-09 10:08:23.092815] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:52.426 [2024-12-09 10:08:23.093068] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:52.426 [2024-12-09 10:08:23.093185] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:52.426 [2024-12-09 10:08:23.093312] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:52.426 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:16:52.426 [2024-12-09 10:08:23.097716] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:52.426 [2024-12-09 10:08:23.097802] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:52.426 [2024-12-09 10:08:23.097849] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:52.426 [2024-12-09 10:08:23.097882] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:52.426 10:08:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:52.426 10:08:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:52.426 [2024-12-09 10:08:23.123057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
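Between removals, the sw_hotplug.sh@56-62 lines traced above re-attach both controllers. The redirect targets are again invisible in the xtrace, so this reconstruction of the rescan-and-rebind sequence is a best guess (@60 and @61 echo the BDF twice, plausibly an unbind followed by a probe):

    echo 1 > /sys/bus/pci/rescan                             # @56: rediscover removed devices
    for bdf in "${nvmes[@]}"; do
      echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"  # @59
      echo "$bdf" > /sys/bus/pci/drivers_probe                            # @60/@61 (assumed)
      echo '' > "/sys/bus/pci/devices/$bdf/driver_override"               # @62: clear override
    done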
00:16:52.426 Controller removed: QEMU NVMe Ctrl (12341 ) 00:16:52.426 [2024-12-09 10:08:23.128931] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:52.426 [2024-12-09 10:08:23.129156] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:52.426 [2024-12-09 10:08:23.129255] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:52.426 [2024-12-09 10:08:23.129296] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:52.427 10:08:23 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:16:52.427 10:08:23 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:52.427 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:16:52.427 EAL: Scan for (pci) bus failed. 00:16:52.427 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:16:52.427 [2024-12-09 10:08:23.132352] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:52.427 [2024-12-09 10:08:23.132476] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:52.427 [2024-12-09 10:08:23.132558] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:52.427 [2024-12-09 10:08:23.132752] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:52.685 10:08:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:52.685 10:08:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:52.685 10:08:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:52.685 10:08:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:52.685 10:08:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:52.685 10:08:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:52.685 10:08:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:52.685 10:08:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:16:52.685 Attaching to 0000:00:10.0 00:16:52.685 Attached to 0000:00:10.0 00:16:52.685 10:08:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:52.685 10:08:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:52.685 10:08:23 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:52.685 Attaching to 0000:00:11.0 00:16:52.685 Attached to 0000:00:11.0 00:16:53.254 QEMU NVMe Ctrl (12340 ): 920 I/Os completed (+920) 00:16:53.254 QEMU NVMe Ctrl (12341 ): 928 I/Os completed (+928) 00:16:53.254 00:16:54.188 QEMU NVMe Ctrl (12340 ): 2340 I/Os completed (+1420) 00:16:54.188 QEMU NVMe Ctrl (12341 ): 2537 I/Os completed (+1609) 00:16:54.188 00:16:55.568 QEMU NVMe Ctrl (12340 ): 3980 I/Os completed (+1640) 00:16:55.568 QEMU NVMe Ctrl (12341 ): 4261 I/Os completed (+1724) 00:16:55.568 00:16:56.503 QEMU NVMe Ctrl (12340 ): 5700 I/Os completed (+1720) 00:16:56.503 QEMU NVMe Ctrl (12341 ): 6055 I/Os completed (+1794) 00:16:56.503 00:16:57.472 QEMU NVMe Ctrl (12340 ): 7260 I/Os completed (+1560) 00:16:57.472 QEMU NVMe Ctrl (12341 ): 7702 I/Os completed (+1647) 00:16:57.472 00:16:58.409 QEMU NVMe Ctrl (12340 ): 8936 I/Os completed (+1676) 00:16:58.409 QEMU NVMe Ctrl (12341 ): 9424 I/Os completed (+1722) 00:16:58.409 00:16:59.343 QEMU NVMe Ctrl (12340 ): 10461 I/Os completed (+1525) 00:16:59.343 QEMU NVMe Ctrl (12341 ): 11067 I/Os completed (+1643) 00:16:59.343 00:17:00.280 
QEMU NVMe Ctrl (12340 ): 11973 I/Os completed (+1512) 00:17:00.280 QEMU NVMe Ctrl (12341 ): 12722 I/Os completed (+1655) 00:17:00.280 00:17:01.217 QEMU NVMe Ctrl (12340 ): 13410 I/Os completed (+1437) 00:17:01.217 QEMU NVMe Ctrl (12341 ): 14278 I/Os completed (+1556) 00:17:01.217 00:17:02.593 QEMU NVMe Ctrl (12340 ): 14942 I/Os completed (+1532) 00:17:02.593 QEMU NVMe Ctrl (12341 ): 15897 I/Os completed (+1619) 00:17:02.594 00:17:03.529 QEMU NVMe Ctrl (12340 ): 16459 I/Os completed (+1517) 00:17:03.529 QEMU NVMe Ctrl (12341 ): 17507 I/Os completed (+1610) 00:17:03.529 00:17:04.467 QEMU NVMe Ctrl (12340 ): 17891 I/Os completed (+1432) 00:17:04.467 QEMU NVMe Ctrl (12341 ): 18988 I/Os completed (+1481) 00:17:04.467 00:17:04.725 10:08:35 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:17:04.725 10:08:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:04.725 10:08:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:04.725 10:08:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:04.725 [2024-12-09 10:08:35.405170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:17:04.725 Controller removed: QEMU NVMe Ctrl (12340 ) 00:17:04.725 [2024-12-09 10:08:35.407871] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:04.725 [2024-12-09 10:08:35.408032] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:04.725 [2024-12-09 10:08:35.408126] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:04.725 [2024-12-09 10:08:35.408294] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:04.725 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:17:04.725 [2024-12-09 10:08:35.412468] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:04.725 [2024-12-09 10:08:35.412687] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:04.725 [2024-12-09 10:08:35.412773] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:04.725 [2024-12-09 10:08:35.412984] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:04.725 10:08:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:04.725 10:08:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:04.725 [2024-12-09 10:08:35.428069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:17:04.725 Controller removed: QEMU NVMe Ctrl (12341 ) 00:17:04.725 [2024-12-09 10:08:35.430625] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:04.725 [2024-12-09 10:08:35.430850] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:04.725 [2024-12-09 10:08:35.431017] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:04.725 [2024-12-09 10:08:35.431196] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:04.725 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:17:04.725 [2024-12-09 10:08:35.434970] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:04.725 [2024-12-09 10:08:35.435157] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:04.725 [2024-12-09 10:08:35.435262] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:04.725 [2024-12-09 10:08:35.435349] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:04.725 10:08:35 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:17:04.725 10:08:35 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:04.725 10:08:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:04.725 10:08:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:04.725 10:08:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:04.985 10:08:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:04.985 10:08:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:04.985 10:08:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:04.985 10:08:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:04.985 10:08:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:17:04.985 Attaching to 0000:00:10.0 00:17:04.985 Attached to 0000:00:10.0 00:17:04.985 10:08:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:04.985 10:08:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:04.985 10:08:35 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:04.985 Attaching to 0000:00:11.0 00:17:04.985 Attached to 0000:00:11.0 00:17:04.985 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:17:04.985 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:17:04.985 [2024-12-09 10:08:35.719008] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:17:17.272 10:08:47 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:17:17.272 10:08:47 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:17.272 10:08:47 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.99 00:17:17.272 10:08:47 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.99 00:17:17.272 10:08:47 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:17:17.272 10:08:47 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.99 00:17:17.272 10:08:47 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.99 2 00:17:17.272 remove_attach_helper took 42.99s to complete (handling 2 nvme drive(s)) 10:08:47 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:17:23.834 10:08:53 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68635 00:17:23.834 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68635) - No such process 00:17:23.834 10:08:53 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68635 00:17:23.834 10:08:53 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:17:23.834 10:08:53 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:17:23.834 10:08:53 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:17:23.834 10:08:53 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=69175 00:17:23.834 10:08:53 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:23.834 10:08:53 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:17:23.834 10:08:53 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 69175 00:17:23.834 10:08:53 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 69175 ']' 00:17:23.834 10:08:53 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.834 10:08:53 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:23.834 10:08:53 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:23.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:23.834 10:08:53 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:23.834 10:08:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:23.834 [2024-12-09 10:08:53.859992] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:17:23.834 [2024-12-09 10:08:53.860485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69175 ] 00:17:23.834 [2024-12-09 10:08:54.047645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.834 [2024-12-09 10:08:54.181515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.770 10:08:55 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:24.770 10:08:55 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:17:24.770 10:08:55 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:17:24.770 10:08:55 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.770 10:08:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:24.770 10:08:55 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.770 10:08:55 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:17:24.770 10:08:55 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:17:24.770 10:08:55 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:17:24.770 10:08:55 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:17:24.770 10:08:55 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:17:24.770 10:08:55 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:17:24.770 10:08:55 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:17:24.770 10:08:55 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:17:24.770 10:08:55 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:17:24.770 10:08:55 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:17:24.770 10:08:55 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:17:24.770 10:08:55 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:17:24.770 10:08:55 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:17:31.356 10:09:01 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:31.356 10:09:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:31.356 10:09:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:31.356 10:09:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:31.356 10:09:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:31.356 10:09:01 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:17:31.356 10:09:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:31.356 10:09:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:31.356 10:09:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:31.356 10:09:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:31.356 10:09:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:31.356 10:09:01 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.356 10:09:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:31.356 10:09:01 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.356 [2024-12-09 10:09:01.311706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:17:31.356 10:09:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:17:31.356 10:09:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:17:31.356 [2024-12-09 10:09:01.315094] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:31.356 [2024-12-09 10:09:01.315329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:31.356 [2024-12-09 10:09:01.315367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.356 [2024-12-09 10:09:01.315413] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:31.356 [2024-12-09 10:09:01.315430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:31.356 [2024-12-09 10:09:01.315447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.356 [2024-12-09 10:09:01.315463] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:31.356 [2024-12-09 10:09:01.315480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:31.356 [2024-12-09 10:09:01.315494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.356 [2024-12-09 10:09:01.315517] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:31.356 [2024-12-09 10:09:01.315532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:31.356 [2024-12-09 10:09:01.315549] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.356 [2024-12-09 10:09:01.711733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:17:31.356 [2024-12-09 10:09:01.715047] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:31.356 [2024-12-09 10:09:01.715102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:31.356 [2024-12-09 10:09:01.715129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.356 [2024-12-09 10:09:01.715159] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:31.356 [2024-12-09 10:09:01.715178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:31.356 [2024-12-09 10:09:01.715193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.356 [2024-12-09 10:09:01.715212] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:31.356 [2024-12-09 10:09:01.715226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:31.356 [2024-12-09 10:09:01.715243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.356 [2024-12-09 10:09:01.715258] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:31.356 [2024-12-09 10:09:01.715275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:31.356 [2024-12-09 10:09:01.715289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.357 10:09:01 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:17:31.357 10:09:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:31.357 10:09:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:31.357 10:09:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:31.357 10:09:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:31.357 10:09:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:31.357 10:09:01 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.357 10:09:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:31.357 10:09:01 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.357 10:09:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:17:31.357 10:09:01 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:31.357 10:09:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:31.357 10:09:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:31.357 10:09:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:31.357 10:09:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:31.357 10:09:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 
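In this target-backed phase (use_bdev=true) the helper no longer inspects sysfs directly: bdev_bdfs (sw_hotplug.sh@12-13, traced above) asks the running SPDK target which PCI controllers still back a bdev. The jq filter is verbatim from the trace; rpc_cmd is the harness's wrapper around the stock RPC client, shown here as rpc.py to keep the sketch self-contained:

    bdev_bdfs() {
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs |
        jq -r '.[].driver_specific.nvme[].pci_address' |   # BDF behind each NVMe bdev
        sort -u
    }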
00:17:31.357 10:09:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:31.357 10:09:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:31.357 10:09:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:17:31.614 10:09:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:31.614 10:09:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:31.614 10:09:02 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:43.814 10:09:14 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:17:43.814 10:09:14 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:17:43.814 10:09:14 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:17:43.814 10:09:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:43.814 10:09:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:43.814 10:09:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:43.814 10:09:14 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.814 10:09:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:43.814 10:09:14 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.814 10:09:14 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:17:43.814 10:09:14 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:43.814 10:09:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:43.814 10:09:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:43.814 10:09:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:43.814 10:09:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:43.814 10:09:14 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:17:43.814 10:09:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:43.814 10:09:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:43.814 10:09:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:43.814 10:09:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:43.814 10:09:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:43.814 10:09:14 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.814 10:09:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:43.814 [2024-12-09 10:09:14.312844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
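The surrounding loop shape is also visible in the trace: @50-51 polls until every removed BDF disappears from the target's view, and after the rescan @70-71 compares the reported list against the expected one. A sketch of that polling, reconstructed from the traced lines:

    # wait for detach: loop while any removed controller is still reported
    while bdfs=($(bdev_bdfs)); ((${#bdfs[@]} > 0)); do
      printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
      sleep 0.5
    done
    # after re-attach, @71 checks that the full expected list is back:
    [[ "$(bdev_bdfs)" == "${nvmes[*]}" ]]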
00:17:43.814 [2024-12-09 10:09:14.316141] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:43.814 [2024-12-09 10:09:14.316196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:43.814 [2024-12-09 10:09:14.316224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.814 [2024-12-09 10:09:14.316258] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:43.814 [2024-12-09 10:09:14.316273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:43.814 [2024-12-09 10:09:14.316294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.814 [2024-12-09 10:09:14.316309] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:43.814 [2024-12-09 10:09:14.316326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:43.814 [2024-12-09 10:09:14.316339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.814 [2024-12-09 10:09:14.316357] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:43.814 [2024-12-09 10:09:14.316371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:43.814 [2024-12-09 10:09:14.316387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.814 10:09:14 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.814 10:09:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:17:43.814 10:09:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:17:44.073 [2024-12-09 10:09:14.712832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:17:44.073 [2024-12-09 10:09:14.716311] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:44.073 [2024-12-09 10:09:14.716539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:44.073 [2024-12-09 10:09:14.716726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.073 [2024-12-09 10:09:14.717015] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:44.073 [2024-12-09 10:09:14.717080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:44.073 [2024-12-09 10:09:14.717280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.073 [2024-12-09 10:09:14.717315] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:44.073 [2024-12-09 10:09:14.717333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:44.073 [2024-12-09 10:09:14.717351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.073 [2024-12-09 10:09:14.717366] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:44.073 [2024-12-09 10:09:14.717383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:44.073 [2024-12-09 10:09:14.717398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.073 10:09:14 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:17:44.073 10:09:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:44.073 10:09:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:44.073 10:09:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:44.073 10:09:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:44.073 10:09:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:44.073 10:09:14 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.073 10:09:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:44.332 10:09:14 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.332 10:09:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:17:44.332 10:09:14 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:44.332 10:09:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:44.332 10:09:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:44.332 10:09:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:44.332 10:09:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:44.332 10:09:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:44.332 10:09:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:44.332 10:09:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:44.332 10:09:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:17:44.590 10:09:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:44.590 10:09:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:44.590 10:09:15 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:56.813 10:09:27 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:17:56.813 10:09:27 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:17:56.813 10:09:27 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:17:56.813 10:09:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:56.813 10:09:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:56.813 10:09:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:56.813 10:09:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.813 10:09:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:56.813 10:09:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.813 10:09:27 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:17:56.813 10:09:27 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:56.813 10:09:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:56.813 10:09:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:56.813 10:09:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:56.813 10:09:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:56.813 [2024-12-09 10:09:27.313123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:17:56.813 10:09:27 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:17:56.813 [2024-12-09 10:09:27.317213] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:56.813 10:09:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:56.813 [2024-12-09 10:09:27.317380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:56.813 [2024-12-09 10:09:27.317496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.813 [2024-12-09 10:09:27.317608] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:56.813 [2024-12-09 10:09:27.317749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:56.813 [2024-12-09 10:09:27.317939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.813 [2024-12-09 10:09:27.318209] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:56.813 [2024-12-09 10:09:27.318496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:56.813 [2024-12-09 10:09:27.318782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.813 [2024-12-09 10:09:27.319008] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:56.813 10:09:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:56.813 [2024-12-09 10:09:27.319289] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:56.813 [2024-12-09 10:09:27.319543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.813 10:09:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:56.813 10:09:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:56.813 10:09:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:56.813 10:09:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.813 10:09:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:56.813 10:09:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.813 10:09:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:17:56.813 10:09:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:17:57.072 [2024-12-09 10:09:27.713106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:17:57.072 [2024-12-09 10:09:27.716071] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:57.072 [2024-12-09 10:09:27.716135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.072 [2024-12-09 10:09:27.716167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.072 [2024-12-09 10:09:27.716194] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:57.072 [2024-12-09 10:09:27.716211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.072 [2024-12-09 10:09:27.716225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.072 [2024-12-09 10:09:27.716258] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:57.072 [2024-12-09 10:09:27.716271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.072 [2024-12-09 10:09:27.716289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.072 [2024-12-09 10:09:27.716302] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:57.072 [2024-12-09 10:09:27.716317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.072 [2024-12-09 10:09:27.716330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.331 10:09:27 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:17:57.331 10:09:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:57.331 10:09:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:57.331 10:09:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:57.331 10:09:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:57.331 10:09:27 sw_hotplug -- 
nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:57.331 10:09:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.331 10:09:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:57.331 10:09:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.331 10:09:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:17:57.331 10:09:27 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:57.331 10:09:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:57.331 10:09:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:57.331 10:09:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:57.331 10:09:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:57.590 10:09:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:57.590 10:09:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:57.590 10:09:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:57.590 10:09:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:17:57.590 10:09:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:57.590 10:09:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:57.590 10:09:28 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:18:09.870 10:09:40 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:18:09.870 10:09:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:18:09.870 10:09:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:18:09.870 10:09:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:09.870 10:09:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:09.870 10:09:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:09.870 10:09:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.870 10:09:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:09.870 10:09:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.870 10:09:40 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:18:09.870 10:09:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:18:09.870 10:09:40 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.06 00:18:09.870 10:09:40 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.06 00:18:09.870 10:09:40 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:18:09.870 10:09:40 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.06 00:18:09.870 10:09:40 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.06 2 00:18:09.870 remove_attach_helper took 45.06s to complete (handling 2 nvme drive(s)) 10:09:40 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:18:09.870 10:09:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.870 10:09:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:09.870 10:09:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.870 10:09:40 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:18:09.870 10:09:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.870 10:09:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:09.870 10:09:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
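The -d/-e pair just traced (sw_hotplug.sh@119-120) exercises the target's own hotplug monitor over RPC, first off, then on, so that in the run that follows, rescanned controllers re-attach as bdevs without restarting the target. With the stock client the same calls would be:

    scripts/rpc.py bdev_nvme_set_hotplug -d   # pause the target's hotplug polling
    scripts/rpc.py bdev_nvme_set_hotplug -e   # resume; reattached NVMe devices become bdevs again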
00:18:09.870 10:09:40 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:18:09.871 10:09:40 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:18:09.871 10:09:40 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:18:09.871 10:09:40 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:18:09.871 10:09:40 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:18:09.871 10:09:40 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:18:09.871 10:09:40 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:18:09.871 10:09:40 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:18:09.871 10:09:40 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:18:09.871 10:09:40 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:18:09.871 10:09:40 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:18:09.871 10:09:40 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:18:09.871 10:09:40 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:18:16.463 10:09:46 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:18:16.463 10:09:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:16.463 10:09:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:16.463 10:09:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:16.463 10:09:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:16.463 10:09:46 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:18:16.463 10:09:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:16.463 10:09:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:16.464 10:09:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:16.464 10:09:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:16.464 10:09:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:16.464 10:09:46 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.464 10:09:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:16.464 [2024-12-09 10:09:46.413586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
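Each debug_remove_attach_helper run is timed by the wrapper traced at the start of this block (timing_cmd, autotest_common.sh@709-719): the helper executes under bash's time keyword with TIMEFORMAT=%2R, so only the elapsed seconds come back and become helper_time (42.99 and 45.06 in the summaries above). A simplified reconstruction of the idiom, not the verbatim autotest_common.sh:

    timing_cmd() {
      local TIMEFORMAT=%2R time
      exec 3>&2                    # keep a handle on the real stderr
      # the command substitution captures only the group's stderr, which is
      # where `time` writes its %2R report; the command's own streams are
      # routed to the saved descriptor so they stay visible
      time=$( { time "$@" 1>&3 2>&3; } 2>&1 )
      exec 3>&-
      echo "$time"
    }
    helper_time=$(timing_cmd remove_attach_helper 3 6 true)
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" 2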
00:18:16.464 [2024-12-09 10:09:46.417040] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:16.464 10:09:46 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.464 [2024-12-09 10:09:46.417274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:16.464 [2024-12-09 10:09:46.417467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.464 [2024-12-09 10:09:46.417535] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:16.464 [2024-12-09 10:09:46.417568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:16.464 [2024-12-09 10:09:46.417601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.464 [2024-12-09 10:09:46.417627] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:16.464 [2024-12-09 10:09:46.417655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:16.464 [2024-12-09 10:09:46.417684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.464 [2024-12-09 10:09:46.417703] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:16.464 [2024-12-09 10:09:46.417717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:16.464 [2024-12-09 10:09:46.417736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.464 10:09:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:18:16.464 10:09:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:18:16.464 [2024-12-09 10:09:46.813590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:18:16.464 [2024-12-09 10:09:46.820087] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:16.464 [2024-12-09 10:09:46.820337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:16.464 [2024-12-09 10:09:46.820583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.464 [2024-12-09 10:09:46.820815] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:16.464 [2024-12-09 10:09:46.821091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:16.464 [2024-12-09 10:09:46.821326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.464 [2024-12-09 10:09:46.821565] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:16.464 [2024-12-09 10:09:46.821765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:16.464 [2024-12-09 10:09:46.822041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.464 [2024-12-09 10:09:46.822236] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:16.464 [2024-12-09 10:09:46.822445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:16.464 [2024-12-09 10:09:46.822704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.464 [2024-12-09 10:09:46.822942] bdev_nvme.c:5588:aer_cb: *WARNING*: AER request execute failed 00:18:16.464 [2024-12-09 10:09:46.822982] bdev_nvme.c:5588:aer_cb: *WARNING*: AER request execute failed 00:18:16.464 [2024-12-09 10:09:46.823020] bdev_nvme.c:5588:aer_cb: *WARNING*: AER request execute failed 00:18:16.464 [2024-12-09 10:09:46.823046] bdev_nvme.c:5588:aer_cb: *WARNING*: AER request execute failed 00:18:16.464 10:09:46 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:18:16.464 10:09:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:16.464 10:09:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:16.464 10:09:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:16.464 10:09:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:16.464 10:09:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:16.464 10:09:46 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.464 10:09:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:16.464 10:09:46 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.464 10:09:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:18:16.464 10:09:46 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:18:16.464 10:09:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:16.464 10:09:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:16.464 10:09:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- 
# echo 0000:00:10.0 00:18:16.464 10:09:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:18:16.464 10:09:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:16.464 10:09:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:16.464 10:09:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:16.464 10:09:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:18:16.722 10:09:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:18:16.722 10:09:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:16.722 10:09:47 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:18:28.965 10:09:59 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:18:28.965 10:09:59 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:18:28.965 10:09:59 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:18:28.965 10:09:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:28.965 10:09:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:28.965 10:09:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:28.965 10:09:59 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.965 10:09:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:28.965 10:09:59 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.965 10:09:59 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:18:28.965 10:09:59 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:18:28.965 10:09:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:28.965 10:09:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:28.965 10:09:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:28.965 10:09:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:28.965 10:09:59 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:18:28.965 10:09:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:28.965 10:09:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:28.965 10:09:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:28.965 10:09:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:28.965 10:09:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:28.965 10:09:59 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.965 10:09:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:28.965 [2024-12-09 10:09:59.413880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
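[Unlike the sysfs writes, `bdev_bdfs` is fully visible in the trace (sw_hotplug.sh@12-13): an SPDK RPC listing of all bdevs is fed through jq to extract the NVMe PCI addresses, deduplicated with `sort -u`. The `/dev/fd/63` argument in the jq invocation is simply how bash renders a `<(...)` process substitution in xtrace:

    # bdev_bdfs as traced above: ask the target which NVMe controllers
    # still back a bdev. `rpc_cmd` is the framework's rpc.py wrapper.
    bdev_bdfs() {
        jq -r '.[].driver_specific.nvme[].pci_address' \
            <(rpc_cmd bdev_get_bdevs) | sort -u
    }

An empty result means every controller detached cleanly; the `(( 2 > 0 ))` and `(( 0 > 0 ))` tests in the trace are counting its output lines as array elements.]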
00:18:28.966 [2024-12-09 10:09:59.417040] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:28.966 [2024-12-09 10:09:59.417093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.966 [2024-12-09 10:09:59.417116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.966 [2024-12-09 10:09:59.417149] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:28.966 [2024-12-09 10:09:59.417165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.966 [2024-12-09 10:09:59.417182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.966 [2024-12-09 10:09:59.417199] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:28.966 [2024-12-09 10:09:59.417218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.966 [2024-12-09 10:09:59.417232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.966 [2024-12-09 10:09:59.417250] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:28.966 [2024-12-09 10:09:59.417275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.966 [2024-12-09 10:09:59.417297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.966 10:09:59 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.966 10:09:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:18:28.966 10:09:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:18:29.224 [2024-12-09 10:09:59.813928] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
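[The way the trace attributes the query, the element-count test and the `sleep 0.5` all to sw_hotplug.sh@50, with only the printf on @51, suggests a one-line polling loop. A plausible reading (the exact source line is not in the log), reusing bdev_bdfs from the sketch above:

    # Plausible detach-wait loop: re-query every 0.5 s while any of the
    # removed controllers still shows up in the bdev layer.
    while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)) && sleep 0.5; do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
    done

This matches the observed order exactly: query, count, sleep, the "Still waiting" report for the stale list, then the next query that comes back empty.]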
00:18:29.224 [2024-12-09 10:09:59.817612] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:29.224 [2024-12-09 10:09:59.817668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:29.224 [2024-12-09 10:09:59.817695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.224 [2024-12-09 10:09:59.817723] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:29.224 [2024-12-09 10:09:59.817741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:29.224 [2024-12-09 10:09:59.817756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.224 [2024-12-09 10:09:59.817776] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:29.224 [2024-12-09 10:09:59.817790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:29.224 [2024-12-09 10:09:59.817807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.224 [2024-12-09 10:09:59.817822] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:29.224 [2024-12-09 10:09:59.817858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:29.224 [2024-12-09 10:09:59.817874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.224 10:09:59 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:18:29.224 10:09:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:29.224 10:09:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:29.224 10:09:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:29.225 10:09:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:29.225 10:09:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:29.225 10:09:59 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.225 10:09:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:29.225 10:09:59 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.483 10:10:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:18:29.483 10:10:00 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:18:29.483 10:10:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:29.483 10:10:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:29.483 10:10:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:18:29.483 10:10:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:18:29.483 10:10:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:29.483 10:10:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:29.483 10:10:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:29.483 10:10:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:18:29.741 10:10:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:18:29.741 10:10:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:29.741 10:10:00 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:18:42.037 10:10:12 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:18:42.037 10:10:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:18:42.037 10:10:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:18:42.037 10:10:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:42.037 10:10:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:42.037 10:10:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:42.037 10:10:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.038 10:10:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:42.038 10:10:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.038 10:10:12 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:18:42.038 10:10:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:18:42.038 10:10:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:42.038 10:10:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:42.038 10:10:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:42.038 10:10:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:42.038 [2024-12-09 10:10:12.414271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:18:42.038 [2024-12-09 10:10:12.417034] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:42.038 [2024-12-09 10:10:12.417214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:42.038 [2024-12-09 10:10:12.417380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.038 [2024-12-09 10:10:12.417570] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:42.038 [2024-12-09 10:10:12.417756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:42.038 [2024-12-09 10:10:12.417952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.038 [2024-12-09 10:10:12.418113] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:42.038 [2024-12-09 10:10:12.418253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:42.038 [2024-12-09 10:10:12.418412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.038 [2024-12-09 10:10:12.418656] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:42.038 [2024-12-09 10:10:12.418859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:42.038 [2024-12-09 10:10:12.419028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST 
(00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.038 10:10:12 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:18:42.038 10:10:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:42.038 10:10:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:42.038 10:10:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:42.038 10:10:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:42.038 10:10:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:42.038 10:10:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.038 10:10:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:42.038 10:10:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.038 10:10:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:18:42.038 10:10:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:18:42.296 10:10:12 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:18:42.296 10:10:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:42.296 10:10:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:42.297 10:10:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:42.297 10:10:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:42.297 10:10:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:42.297 10:10:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.297 10:10:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:42.297 10:10:13 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.297 10:10:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:18:42.297 10:10:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:18:42.556 [2024-12-09 10:10:13.114280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
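[The `[[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0... ]]` lines at sw_hotplug.sh@71 are not corruption: when the right-hand side of `==` inside `[[ ]]` is quoted, bash's xtrace re-prints it with every character backslash-escaped so the traced form round-trips as a literal (non-glob) match. The underlying verification after each rescan/rebind is just a string comparison:

    # What sw_hotplug.sh@71 effectively checks after each re-attach:
    # the sorted address list equals the expected pair again.
    bdfs=($(bdev_bdfs))
    [[ ${bdfs[*]} == "0000:00:10.0 0000:00:11.0" ]]   # set -x renders the RHS as \0\0\0\0...
]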
00:18:42.556 [2024-12-09 10:10:13.117743] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:42.556 [2024-12-09 10:10:13.117929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:42.556 [2024-12-09 10:10:13.118173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.556 [2024-12-09 10:10:13.118431] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:42.556 [2024-12-09 10:10:13.118551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:42.556 [2024-12-09 10:10:13.118690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.556 [2024-12-09 10:10:13.118915] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:42.556 [2024-12-09 10:10:13.119049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:42.556 [2024-12-09 10:10:13.119209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.556 [2024-12-09 10:10:13.119293] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:42.556 [2024-12-09 10:10:13.119399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:42.556 [2024-12-09 10:10:13.119542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.815 10:10:13 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:18:42.815 10:10:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:42.815 10:10:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:42.815 10:10:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:42.815 10:10:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:42.815 10:10:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:42.815 10:10:13 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.815 10:10:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:42.815 10:10:13 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.815 10:10:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:18:42.815 10:10:13 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:18:43.074 10:10:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:43.074 10:10:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:43.074 10:10:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:18:43.074 10:10:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:18:43.074 10:10:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:43.074 10:10:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:43.074 10:10:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:43.074 10:10:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:18:43.333 10:10:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:18:43.333 10:10:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:43.333 10:10:13 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:18:55.538 10:10:25 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:18:55.538 10:10:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:18:55.538 10:10:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:18:55.538 10:10:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:55.538 10:10:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:55.538 10:10:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:55.538 10:10:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.538 10:10:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:55.538 10:10:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.538 10:10:25 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:18:55.538 10:10:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:18:55.538 10:10:25 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.63 00:18:55.538 10:10:25 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.63 00:18:55.538 10:10:25 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:18:55.538 10:10:25 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.63 00:18:55.538 10:10:25 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.63 2 00:18:55.538 remove_attach_helper took 45.63s to complete (handling 2 nvme drive(s)) 10:10:25 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:18:55.538 10:10:25 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 69175 00:18:55.538 10:10:25 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 69175 ']' 00:18:55.538 10:10:25 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 69175 00:18:55.538 10:10:25 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:18:55.538 10:10:25 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:55.538 10:10:25 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69175 00:18:55.538 10:10:25 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:55.538 10:10:25 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:55.538 killing process with pid 69175 00:18:55.538 10:10:25 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69175' 00:18:55.538 10:10:25 sw_hotplug -- common/autotest_common.sh@973 -- # kill 69175 00:18:55.538 10:10:25 sw_hotplug -- common/autotest_common.sh@978 -- # wait 69175 00:18:58.071 10:10:28 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:58.331 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:58.898 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:58.898 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:58.898 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:18:59.157 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:18:59.157 00:18:59.157 real 2m32.984s 00:18:59.157 user 1m53.035s 00:18:59.157 sys 0m19.853s 00:18:59.157 10:10:29 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:18:59.157 ************************************ 00:18:59.157 END TEST sw_hotplug 00:18:59.157 ************************************ 00:18:59.157 10:10:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:59.157 10:10:29 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:18:59.157 10:10:29 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:18:59.157 10:10:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:59.157 10:10:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:59.157 10:10:29 -- common/autotest_common.sh@10 -- # set +x 00:18:59.157 ************************************ 00:18:59.157 START TEST nvme_xnvme 00:18:59.157 ************************************ 00:18:59.157 10:10:29 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:18:59.157 * Looking for test storage... 00:18:59.157 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:18:59.157 10:10:29 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:59.157 10:10:29 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:18:59.157 10:10:29 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:59.418 10:10:30 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:59.418 10:10:30 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:59.418 10:10:30 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:59.418 10:10:30 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:59.418 10:10:30 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:18:59.418 10:10:30 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:18:59.418 10:10:30 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:18:59.418 10:10:30 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:18:59.418 10:10:30 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:18:59.418 10:10:30 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:18:59.418 10:10:30 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:18:59.418 10:10:30 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:59.418 10:10:30 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:18:59.418 10:10:30 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:18:59.418 10:10:30 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:59.418 10:10:30 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:59.418 10:10:30 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:18:59.418 10:10:30 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:18:59.418 10:10:30 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:59.418 10:10:30 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:18:59.418 10:10:30 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:18:59.418 10:10:30 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:18:59.418 10:10:30 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:18:59.418 10:10:30 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:59.418 10:10:30 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:18:59.418 10:10:30 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:18:59.418 10:10:30 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:59.418 10:10:30 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:59.418 10:10:30 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:18:59.418 10:10:30 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:59.418 10:10:30 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:59.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.418 --rc genhtml_branch_coverage=1 00:18:59.418 --rc genhtml_function_coverage=1 00:18:59.418 --rc genhtml_legend=1 00:18:59.418 --rc geninfo_all_blocks=1 00:18:59.418 --rc geninfo_unexecuted_blocks=1 00:18:59.418 00:18:59.418 ' 00:18:59.418 10:10:30 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:59.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.419 --rc genhtml_branch_coverage=1 00:18:59.419 --rc genhtml_function_coverage=1 00:18:59.419 --rc genhtml_legend=1 00:18:59.419 --rc geninfo_all_blocks=1 00:18:59.419 --rc geninfo_unexecuted_blocks=1 00:18:59.419 00:18:59.419 ' 00:18:59.419 10:10:30 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:59.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.419 --rc genhtml_branch_coverage=1 00:18:59.419 --rc genhtml_function_coverage=1 00:18:59.419 --rc genhtml_legend=1 00:18:59.419 --rc geninfo_all_blocks=1 00:18:59.419 --rc geninfo_unexecuted_blocks=1 00:18:59.419 00:18:59.419 ' 00:18:59.419 10:10:30 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:59.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.419 --rc genhtml_branch_coverage=1 00:18:59.419 --rc genhtml_function_coverage=1 00:18:59.419 --rc genhtml_legend=1 00:18:59.419 --rc geninfo_all_blocks=1 00:18:59.419 --rc geninfo_unexecuted_blocks=1 00:18:59.419 00:18:59.419 ' 00:18:59.419 10:10:30 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:18:59.419 10:10:30 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:18:59.419 10:10:30 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:18:59.419 10:10:30 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:18:59.419 10:10:30 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:18:59.419 10:10:30 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:18:59.419 10:10:30 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:18:59.419 10:10:30 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:18:59.419 10:10:30 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:18:59.419 10:10:30 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
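[Every CONFIG_* flag sourced above from build_config.sh resurfaces a few lines further down as the generated include/spdk/config.h that applications.sh dumps: `y` flags become `#define SPDK_CONFIG_X 1`, `n` flags become `#undef SPDK_CONFIG_X`, and path- or string-valued flags are defined verbatim. A toy converter, purely to illustrate the mapping (this is not SPDK's actual generator):

    # Illustration of the CONFIG_FOO=y/n -> SPDK_CONFIG_FOO mapping seen
    # between the build_config.sh dump and the config.h dump below.
    emit_config_h() {
        local name value
        while IFS='=' read -r name value; do
            case $value in
                y) printf '#define SPDK_%s 1\n' "$name" ;;
                n) printf '#undef SPDK_%s\n' "$name" ;;
                *) printf '#define SPDK_%s %s\n' "$name" "$value" ;;
            esac
        done
    }
    printf 'CONFIG_ASAN=y\nCONFIG_USDT=n\n' | emit_config_h
    # -> #define SPDK_CONFIG_ASAN 1
    # -> #undef SPDK_CONFIG_USDT
]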
00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:18:59.419 10:10:30 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:18:59.419 10:10:30 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:18:59.420 10:10:30 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:18:59.420 10:10:30 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:18:59.420 10:10:30 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:18:59.420 10:10:30 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:18:59.420 10:10:30 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:18:59.420 10:10:30 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:18:59.420 10:10:30 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:18:59.420 10:10:30 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:18:59.420 10:10:30 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:18:59.420 10:10:30 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:18:59.420 10:10:30 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:18:59.420 10:10:30 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:18:59.420 10:10:30 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:18:59.420 10:10:30 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:18:59.420 10:10:30 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:18:59.420 10:10:30 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:18:59.420 #define SPDK_CONFIG_H 00:18:59.420 #define SPDK_CONFIG_AIO_FSDEV 1 00:18:59.420 #define SPDK_CONFIG_APPS 1 00:18:59.420 #define SPDK_CONFIG_ARCH native 00:18:59.420 #define SPDK_CONFIG_ASAN 1 00:18:59.420 #undef SPDK_CONFIG_AVAHI 00:18:59.420 #undef SPDK_CONFIG_CET 00:18:59.420 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:18:59.420 #define SPDK_CONFIG_COVERAGE 1 00:18:59.420 #define SPDK_CONFIG_CROSS_PREFIX 00:18:59.420 #undef SPDK_CONFIG_CRYPTO 00:18:59.420 #undef SPDK_CONFIG_CRYPTO_MLX5 00:18:59.420 #undef SPDK_CONFIG_CUSTOMOCF 00:18:59.420 #undef SPDK_CONFIG_DAOS 00:18:59.420 #define SPDK_CONFIG_DAOS_DIR 00:18:59.420 #define SPDK_CONFIG_DEBUG 1 00:18:59.420 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:18:59.420 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:18:59.420 #define SPDK_CONFIG_DPDK_INC_DIR 00:18:59.420 #define SPDK_CONFIG_DPDK_LIB_DIR 00:18:59.420 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:18:59.420 #undef SPDK_CONFIG_DPDK_UADK 00:18:59.420 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:18:59.420 #define SPDK_CONFIG_EXAMPLES 1 00:18:59.420 #undef SPDK_CONFIG_FC 00:18:59.420 #define SPDK_CONFIG_FC_PATH 00:18:59.420 #define SPDK_CONFIG_FIO_PLUGIN 1 00:18:59.420 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:18:59.420 #define SPDK_CONFIG_FSDEV 1 00:18:59.420 #undef SPDK_CONFIG_FUSE 00:18:59.420 #undef SPDK_CONFIG_FUZZER 00:18:59.420 #define SPDK_CONFIG_FUZZER_LIB 00:18:59.420 #undef SPDK_CONFIG_GOLANG 00:18:59.420 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:18:59.420 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:18:59.420 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:18:59.420 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:18:59.420 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:18:59.420 #undef SPDK_CONFIG_HAVE_LIBBSD 00:18:59.420 #undef SPDK_CONFIG_HAVE_LZ4 00:18:59.420 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:18:59.420 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:18:59.420 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:18:59.420 #define SPDK_CONFIG_IDXD 1 00:18:59.420 #define SPDK_CONFIG_IDXD_KERNEL 1 00:18:59.420 #undef SPDK_CONFIG_IPSEC_MB 00:18:59.420 #define SPDK_CONFIG_IPSEC_MB_DIR 00:18:59.420 #define SPDK_CONFIG_ISAL 1 00:18:59.420 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:18:59.420 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:18:59.420 #define SPDK_CONFIG_LIBDIR 00:18:59.420 #undef SPDK_CONFIG_LTO 00:18:59.420 #define SPDK_CONFIG_MAX_LCORES 128 00:18:59.420 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:18:59.420 #define SPDK_CONFIG_NVME_CUSE 1 00:18:59.420 #undef SPDK_CONFIG_OCF 00:18:59.420 #define SPDK_CONFIG_OCF_PATH 00:18:59.420 #define SPDK_CONFIG_OPENSSL_PATH 00:18:59.420 #undef SPDK_CONFIG_PGO_CAPTURE 00:18:59.420 #define SPDK_CONFIG_PGO_DIR 00:18:59.420 #undef SPDK_CONFIG_PGO_USE 00:18:59.420 #define SPDK_CONFIG_PREFIX /usr/local 00:18:59.420 #undef SPDK_CONFIG_RAID5F 00:18:59.420 #undef SPDK_CONFIG_RBD 00:18:59.420 #define SPDK_CONFIG_RDMA 1 00:18:59.420 #define SPDK_CONFIG_RDMA_PROV verbs 00:18:59.420 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:18:59.420 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:18:59.420 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:18:59.420 #define SPDK_CONFIG_SHARED 1 00:18:59.420 #undef SPDK_CONFIG_SMA 00:18:59.420 #define SPDK_CONFIG_TESTS 1 00:18:59.420 #undef SPDK_CONFIG_TSAN 00:18:59.420 #define SPDK_CONFIG_UBLK 1 00:18:59.420 #define SPDK_CONFIG_UBSAN 1 00:18:59.420 #undef SPDK_CONFIG_UNIT_TESTS 00:18:59.420 #undef SPDK_CONFIG_URING 00:18:59.420 #define SPDK_CONFIG_URING_PATH 00:18:59.420 #undef SPDK_CONFIG_URING_ZNS 00:18:59.420 #undef SPDK_CONFIG_USDT 00:18:59.420 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:18:59.420 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:18:59.420 #undef SPDK_CONFIG_VFIO_USER 00:18:59.420 #define SPDK_CONFIG_VFIO_USER_DIR 00:18:59.420 #define SPDK_CONFIG_VHOST 1 00:18:59.420 #define SPDK_CONFIG_VIRTIO 1 00:18:59.420 #undef SPDK_CONFIG_VTUNE 00:18:59.420 #define SPDK_CONFIG_VTUNE_DIR 00:18:59.420 #define SPDK_CONFIG_WERROR 1 00:18:59.420 #define SPDK_CONFIG_WPDK_DIR 00:18:59.420 #define SPDK_CONFIG_XNVME 1 00:18:59.420 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:18:59.420 10:10:30 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:18:59.420 10:10:30 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:59.420 10:10:30 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:18:59.420 10:10:30 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:59.420 10:10:30 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:59.420 10:10:30 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:59.420 10:10:30 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.420 10:10:30 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.420 10:10:30 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.420 10:10:30 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:18:59.420 10:10:30 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.420 10:10:30 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:18:59.420 10:10:30 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:18:59.420 10:10:30 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:18:59.420 10:10:30 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:18:59.420 10:10:30 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:18:59.420 10:10:30 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:18:59.420 10:10:30 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:18:59.420 10:10:30 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:18:59.420 10:10:30 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:18:59.420 10:10:30 nvme_xnvme -- pm/common@68 -- # uname -s 00:18:59.420 10:10:30 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:18:59.420 10:10:30 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:18:59.420 
10:10:30 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:18:59.420 10:10:30 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:18:59.420 10:10:30 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:18:59.420 10:10:30 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:18:59.420 10:10:30 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:18:59.420 10:10:30 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:18:59.420 10:10:30 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:18:59.421 10:10:30 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:18:59.421 10:10:30 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:18:59.421 10:10:30 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:18:59.421 10:10:30 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:18:59.421 10:10:30 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:18:59.421 10:10:30 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:18:59.421 10:10:30 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:18:59.421 10:10:30 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:18:59.422 10:10:30 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
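[Editor's note] The stretch of trace above is the harness assembling its sanitizer runtime: ASAN and UBSAN option strings are exported, a LeakSanitizer suppression for a known libfuse3 leak is written to /var/tmp/asan_suppression_file, and LSAN_OPTIONS is pointed at that file. A minimal standalone sketch of the same setup, assuming bash, with the option strings and paths copied exactly as logged:

    # rebuild the suppression file, then export the sanitizer knobs
    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    echo "leak:libfuse3.so" > "$asan_suppression_file"   # suppress a known libfuse3 leak
    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
    export LSAN_OPTIONS=suppressions=$asan_suppression_file
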
00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 70521 ]] 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 70521 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.2RKYUL 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.2RKYUL/tests/xnvme /tmp/spdk.2RKYUL 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:18:59.422 10:10:30 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13891911680 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5676171264 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261665792 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266429440 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:18:59.422 10:10:30 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493775872 00:18:59.423 10:10:30 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:18:59.423 10:10:30 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:18:59.423 10:10:30 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:59.423 10:10:30 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:18:59.423 10:10:30 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:18:59.423 10:10:30 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13891911680 00:18:59.423 10:10:30 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:18:59.423 10:10:30 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5676171264 00:18:59.423 10:10:30 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:59.423 10:10:30 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:18:59.423 10:10:30 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:18:59.423 10:10:30 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6266281984 00:18:59.423 10:10:30 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266429440 00:18:59.423 10:10:30 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:18:59.423 10:10:30 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:59.423 10:10:30 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:18:59.423 10:10:30 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:18:59.423 10:10:30 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:18:59.423 10:10:30 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:18:59.423 10:10:30 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:18:59.423 10:10:30 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:59.423 10:10:30 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:18:59.423 10:10:30 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:18:59.423 10:10:30 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:18:59.423 10:10:30 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:18:59.423 10:10:30 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:18:59.423 10:10:30 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:59.423 10:10:30 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:18:59.423 10:10:30 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:18:59.423 10:10:30 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253273600 00:18:59.423 10:10:30 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253285888 00:18:59.423 10:10:30 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:18:59.423 10:10:30 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:59.423 10:10:30 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt/output 00:18:59.423 10:10:30 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:18:59.423 10:10:30 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=93542322176 00:18:59.423 10:10:30 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:18:59.423 10:10:30 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=6160457728 00:18:59.423 10:10:30 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:59.423 10:10:30 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:18:59.423 * Looking for test storage... 
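[Editor's note] What just scrolled past is set_test_storage sizing up every mount: the output of `df -T` is read into associative arrays keyed by mount point, and each storage candidate directory is then checked for the requested free space. A condensed sketch, assuming bash; the variable names and the awk filter are copied from the trace, and the 2 GiB base figure matches requested_size above (the harness pads it by 64 MiB, giving the 2214592512 seen in the log):

    requested_size=2147483648     # 2 GiB before the 64 MiB padding
    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source; fss["$mount"]=$fs
        sizes["$mount"]=$size; uses["$mount"]=$use; avails["$mount"]=$avail
    done < <(df -T | grep -v Filesystem)

    testdir=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme
    mount=$(df "$testdir" | awk '$1 !~ /Filesystem/{print $6}')
    if (( avails["$mount"] >= requested_size )); then
        printf '* Found test storage at %s\n' "$testdir"
    fi
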
00:18:59.423 10:10:30 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:18:59.423 10:10:30 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:18:59.423 10:10:30 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:18:59.423 10:10:30 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:18:59.423 10:10:30 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:18:59.423 10:10:30 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13891911680 00:18:59.423 10:10:30 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:18:59.423 10:10:30 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:18:59.683 10:10:30 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:18:59.683 10:10:30 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:18:59.683 10:10:30 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:18:59.683 10:10:30 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:18:59.683 10:10:30 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:18:59.683 10:10:30 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:18:59.683 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:18:59.683 10:10:30 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:18:59.683 10:10:30 nvme_xnvme -- common/autotest_common.sh@1698 -- # set -o errtrace 00:18:59.683 10:10:30 nvme_xnvme -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:18:59.683 10:10:30 nvme_xnvme -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:18:59.683 10:10:30 nvme_xnvme -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:18:59.683 10:10:30 nvme_xnvme -- common/autotest_common.sh@1703 -- # true 00:18:59.683 10:10:30 nvme_xnvme -- common/autotest_common.sh@1705 -- # xtrace_fd 00:18:59.684 10:10:30 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:18:59.684 10:10:30 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:18:59.684 10:10:30 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:18:59.684 10:10:30 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:18:59.684 10:10:30 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:18:59.684 10:10:30 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:18:59.684 10:10:30 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:18:59.684 10:10:30 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:18:59.684 10:10:30 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:59.684 10:10:30 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:18:59.684 10:10:30 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:59.684 10:10:30 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:59.684 10:10:30 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:59.684 10:10:30 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:59.684 10:10:30 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:59.684 10:10:30 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:18:59.684 10:10:30 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:18:59.684 10:10:30 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:18:59.684 10:10:30 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:18:59.684 10:10:30 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:18:59.684 10:10:30 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:18:59.684 10:10:30 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:18:59.684 10:10:30 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:59.684 10:10:30 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:18:59.684 10:10:30 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:18:59.684 10:10:30 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:59.684 10:10:30 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:59.684 10:10:30 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:18:59.684 10:10:30 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:18:59.684 10:10:30 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:59.684 10:10:30 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:18:59.684 10:10:30 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:18:59.684 10:10:30 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:18:59.684 10:10:30 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:18:59.684 10:10:30 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:59.684 10:10:30 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:18:59.684 10:10:30 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:18:59.684 10:10:30 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:59.684 10:10:30 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:59.684 10:10:30 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:18:59.684 10:10:30 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:59.684 10:10:30 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:59.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.684 --rc genhtml_branch_coverage=1 00:18:59.684 --rc genhtml_function_coverage=1 00:18:59.684 --rc genhtml_legend=1 00:18:59.684 --rc geninfo_all_blocks=1 00:18:59.684 --rc geninfo_unexecuted_blocks=1 00:18:59.684 00:18:59.684 ' 00:18:59.684 10:10:30 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:59.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.684 --rc genhtml_branch_coverage=1 00:18:59.684 --rc genhtml_function_coverage=1 00:18:59.684 --rc genhtml_legend=1 00:18:59.684 --rc geninfo_all_blocks=1 
00:18:59.684 --rc geninfo_unexecuted_blocks=1 00:18:59.684 00:18:59.684 ' 00:18:59.684 10:10:30 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:59.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.684 --rc genhtml_branch_coverage=1 00:18:59.684 --rc genhtml_function_coverage=1 00:18:59.684 --rc genhtml_legend=1 00:18:59.684 --rc geninfo_all_blocks=1 00:18:59.684 --rc geninfo_unexecuted_blocks=1 00:18:59.684 00:18:59.684 ' 00:18:59.684 10:10:30 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:59.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.684 --rc genhtml_branch_coverage=1 00:18:59.684 --rc genhtml_function_coverage=1 00:18:59.684 --rc genhtml_legend=1 00:18:59.684 --rc geninfo_all_blocks=1 00:18:59.684 --rc geninfo_unexecuted_blocks=1 00:18:59.684 00:18:59.684 ' 00:18:59.684 10:10:30 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:59.684 10:10:30 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:18:59.684 10:10:30 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:59.684 10:10:30 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:59.684 10:10:30 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:59.684 10:10:30 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.684 10:10:30 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.684 10:10:30 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.684 10:10:30 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:18:59.684 10:10:30 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.684 10:10:30 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:18:59.684 10:10:30 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:18:59.684 10:10:30 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:18:59.684 10:10:30 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:18:59.684 10:10:30 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:18:59.684 10:10:30 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:18:59.684 10:10:30 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:18:59.684 10:10:30 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:18:59.684 10:10:30 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:18:59.684 10:10:30 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:18:59.684 10:10:30 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:18:59.684 10:10:30 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:18:59.684 10:10:30 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:18:59.684 10:10:30 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:18:59.684 10:10:30 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:18:59.684 10:10:30 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:18:59.684 10:10:30 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:18:59.684 10:10:30 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:18:59.684 10:10:30 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:18:59.684 10:10:30 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:18:59.684 10:10:30 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:18:59.684 10:10:30 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:59.943 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:00.202 Waiting for block devices as requested 00:19:00.202 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:00.460 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:00.460 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:19:00.460 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:19:05.753 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:19:05.753 10:10:36 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:19:06.011 10:10:36 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:19:06.011 10:10:36 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:19:06.270 10:10:36 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:19:06.270 10:10:36 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:19:06.270 10:10:36 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:19:06.270 10:10:36 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:19:06.270 10:10:36 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:19:06.270 No valid GPT data, bailing 00:19:06.270 10:10:36 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:06.270 10:10:36 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:19:06.270 10:10:36 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:19:06.270 10:10:36 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:19:06.270 10:10:36 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:19:06.270 10:10:36 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:19:06.270 10:10:36 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:19:06.270 10:10:36 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:19:06.270 10:10:36 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:19:06.270 10:10:36 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:19:06.270 10:10:36 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:19:06.270 10:10:36 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:19:06.270 10:10:36 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:19:06.270 10:10:36 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:19:06.270 10:10:36 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:19:06.270 10:10:36 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:19:06.270 10:10:36 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:19:06.270 10:10:36 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:06.270 10:10:36 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:06.270 10:10:36 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:06.270 ************************************ 00:19:06.270 START TEST xnvme_rpc 00:19:06.270 ************************************ 00:19:06.270 10:10:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:19:06.270 10:10:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:19:06.270 10:10:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:19:06.270 10:10:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:19:06.270 10:10:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:19:06.270 10:10:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70913 00:19:06.270 10:10:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:06.270 10:10:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70913 00:19:06.270 10:10:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70913 ']' 00:19:06.270 10:10:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.270 10:10:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:06.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:06.270 10:10:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.270 10:10:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:06.270 10:10:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:06.270 [2024-12-09 10:10:37.052601] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
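[Editor's note] The prep step above decides whether /dev/nvme0n1 is safe to claim: spdk-gpt.py reports "No valid GPT data, bailing", blkid finds no PTTYPE, so block_in_use returns 1 (device free) and the device is recorded in the xnvme filename map. A simplified sketch of that probe, assuming bash; the real helper in scripts/common.sh also consults spdk-gpt.py, which is omitted here:

    block_in_use() {
        local block=$1 pt
        # a non-empty partition-table type means something owns the disk
        pt=$(blkid -s PTTYPE -o value "$block" 2>/dev/null)
        [[ -n $pt ]]
    }
    if ! block_in_use /dev/nvme0n1; then
        echo "/dev/nvme0n1 looks unused; claiming it for the xnvme tests"
    fi
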
00:19:06.270 [2024-12-09 10:10:37.052908] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70913 ] 00:19:06.529 [2024-12-09 10:10:37.251407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.788 [2024-12-09 10:10:37.430946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.725 10:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:07.725 10:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:19:07.725 10:10:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:19:07.725 10:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.725 10:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.725 xnvme_bdev 00:19:07.725 10:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.725 10:10:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:19:07.725 10:10:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:07.725 10:10:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:19:07.725 10:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.725 10:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.725 10:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.725 10:10:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:19:07.725 10:10:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:19:07.725 10:10:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:07.725 10:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.725 10:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.725 10:10:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:19:07.725 10:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.725 10:10:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:19:07.725 10:10:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:19:07.725 10:10:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:07.725 10:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.725 10:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.725 10:10:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:19:07.725 10:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.983 10:10:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:19:07.983 10:10:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:19:07.983 10:10:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:07.983 10:10:38 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.983 10:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.983 10:10:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:19:07.983 10:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.983 10:10:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:19:07.983 10:10:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:19:07.983 10:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.983 10:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.983 10:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.983 10:10:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70913 00:19:07.983 10:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70913 ']' 00:19:07.983 10:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70913 00:19:07.983 10:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:19:07.983 10:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:07.983 10:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70913 00:19:07.983 10:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:07.983 10:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:07.983 killing process with pid 70913 00:19:07.983 10:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70913' 00:19:07.983 10:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70913 00:19:07.983 10:10:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70913 00:19:10.518 00:19:10.518 real 0m4.009s 00:19:10.518 user 0m4.101s 00:19:10.518 sys 0m0.709s 00:19:10.518 10:10:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:10.518 10:10:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.518 ************************************ 00:19:10.518 END TEST xnvme_rpc 00:19:10.518 ************************************ 00:19:10.518 10:10:40 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:19:10.518 10:10:40 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:10.518 10:10:40 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:10.518 10:10:40 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:10.518 ************************************ 00:19:10.518 START TEST xnvme_bdevperf 00:19:10.518 ************************************ 00:19:10.518 10:10:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:19:10.518 10:10:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:19:10.518 10:10:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:19:10.518 10:10:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:10.518 10:10:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:19:10.518 10:10:40 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:19:10.518 10:10:40 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:10.518 10:10:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:10.518 { 00:19:10.518 "subsystems": [ 00:19:10.518 { 00:19:10.518 "subsystem": "bdev", 00:19:10.518 "config": [ 00:19:10.518 { 00:19:10.518 "params": { 00:19:10.518 "io_mechanism": "libaio", 00:19:10.518 "conserve_cpu": false, 00:19:10.518 "filename": "/dev/nvme0n1", 00:19:10.518 "name": "xnvme_bdev" 00:19:10.518 }, 00:19:10.518 "method": "bdev_xnvme_create" 00:19:10.518 }, 00:19:10.518 { 00:19:10.518 "method": "bdev_wait_for_examine" 00:19:10.518 } 00:19:10.518 ] 00:19:10.518 } 00:19:10.518 ] 00:19:10.518 } 00:19:10.518 [2024-12-09 10:10:41.095597] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:19:10.518 [2024-12-09 10:10:41.095897] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70998 ] 00:19:10.518 [2024-12-09 10:10:41.283684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.777 [2024-12-09 10:10:41.432252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.344 Running I/O for 5 seconds... 00:19:13.213 29794.00 IOPS, 116.38 MiB/s [2024-12-09T10:10:44.945Z] 29217.00 IOPS, 114.13 MiB/s [2024-12-09T10:10:45.881Z] 28873.33 IOPS, 112.79 MiB/s [2024-12-09T10:10:47.265Z] 28274.75 IOPS, 110.45 MiB/s 00:19:16.468 Latency(us) 00:19:16.468 [2024-12-09T10:10:47.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:16.468 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:19:16.468 xnvme_bdev : 5.00 28710.69 112.15 0.00 0.00 2223.66 229.00 4498.15 00:19:16.468 [2024-12-09T10:10:47.265Z] =================================================================================================================== 00:19:16.468 [2024-12-09T10:10:47.265Z] Total : 28710.69 112.15 0.00 0.00 2223.66 229.00 4498.15 00:19:17.402 10:10:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:17.402 10:10:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:19:17.402 10:10:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:19:17.402 10:10:48 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:17.402 10:10:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:17.661 { 00:19:17.661 "subsystems": [ 00:19:17.661 { 00:19:17.661 "subsystem": "bdev", 00:19:17.661 "config": [ 00:19:17.661 { 00:19:17.661 "params": { 00:19:17.661 "io_mechanism": "libaio", 00:19:17.661 "conserve_cpu": false, 00:19:17.661 "filename": "/dev/nvme0n1", 00:19:17.661 "name": "xnvme_bdev" 00:19:17.661 }, 00:19:17.661 "method": "bdev_xnvme_create" 00:19:17.661 }, 00:19:17.661 { 00:19:17.661 "method": "bdev_wait_for_examine" 00:19:17.661 } 00:19:17.661 ] 00:19:17.661 } 00:19:17.661 ] 00:19:17.661 } 00:19:17.661 [2024-12-09 10:10:48.274639] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
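[Editor's note] Each bdevperf pass in this trace is driven by the JSON blob printed just above it: gen_conf emits a bdev-subsystem config that creates one xnvme bdev and waits for examine, and the harness hands it to bdevperf as "--json /dev/fd/62". A sketch of an equivalent invocation, assuming bash; the /dev/fd path in the log presumably comes from process substitution or an equivalent fd redirection, and the flags are copied verbatim from the randwrite run:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    conf='{"subsystems":[{"subsystem":"bdev","config":[
      {"params":{"io_mechanism":"libaio","conserve_cpu":false,
                 "filename":"/dev/nvme0n1","name":"xnvme_bdev"},
       "method":"bdev_xnvme_create"},
      {"method":"bdev_wait_for_examine"}]}]}'
    # <() hands bdevperf the config on an anonymous /dev/fd/NN descriptor
    "$bdevperf" --json <(printf '%s' "$conf") \
        -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096
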
00:19:17.661 [2024-12-09 10:10:48.274891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71080 ] 00:19:17.920 [2024-12-09 10:10:48.469967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.920 [2024-12-09 10:10:48.630349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.487 Running I/O for 5 seconds... 00:19:20.355 27832.00 IOPS, 108.72 MiB/s [2024-12-09T10:10:52.086Z] 27957.00 IOPS, 109.21 MiB/s [2024-12-09T10:10:53.461Z] 27815.33 IOPS, 108.65 MiB/s [2024-12-09T10:10:54.397Z] 27234.25 IOPS, 106.38 MiB/s 00:19:23.600 Latency(us) 00:19:23.600 [2024-12-09T10:10:54.397Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:23.600 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:19:23.600 xnvme_bdev : 5.00 27166.09 106.12 0.00 0.00 2349.84 255.07 5242.88 00:19:23.600 [2024-12-09T10:10:54.397Z] =================================================================================================================== 00:19:23.600 [2024-12-09T10:10:54.397Z] Total : 27166.09 106.12 0.00 0.00 2349.84 255.07 5242.88 00:19:24.977 00:19:24.977 real 0m14.418s 00:19:24.977 user 0m5.852s 00:19:24.978 sys 0m6.212s 00:19:24.978 10:10:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:24.978 10:10:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:24.978 ************************************ 00:19:24.978 END TEST xnvme_bdevperf 00:19:24.978 ************************************ 00:19:24.978 10:10:55 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:19:24.978 10:10:55 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:24.978 10:10:55 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:24.978 10:10:55 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:24.978 ************************************ 00:19:24.978 START TEST xnvme_fio_plugin 00:19:24.978 ************************************ 00:19:24.978 10:10:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:19:24.978 10:10:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:19:24.978 10:10:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:19:24.978 10:10:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:24.978 10:10:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:24.978 10:10:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:24.978 10:10:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:24.978 10:10:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:24.978 10:10:55 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:24.978 10:10:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:24.978 10:10:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:24.978 10:10:55 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:24.978 10:10:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:24.978 10:10:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:24.978 10:10:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:24.978 10:10:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:24.978 10:10:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:24.978 10:10:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:24.978 10:10:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:24.978 10:10:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:24.978 10:10:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:24.978 10:10:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:24.978 10:10:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:24.978 10:10:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:24.978 { 00:19:24.978 "subsystems": [ 00:19:24.978 { 00:19:24.978 "subsystem": "bdev", 00:19:24.978 "config": [ 00:19:24.978 { 00:19:24.978 "params": { 00:19:24.978 "io_mechanism": "libaio", 00:19:24.978 "conserve_cpu": false, 00:19:24.978 "filename": "/dev/nvme0n1", 00:19:24.978 "name": "xnvme_bdev" 00:19:24.978 }, 00:19:24.978 "method": "bdev_xnvme_create" 00:19:24.978 }, 00:19:24.978 { 00:19:24.978 "method": "bdev_wait_for_examine" 00:19:24.978 } 00:19:24.978 ] 00:19:24.978 } 00:19:24.978 ] 00:19:24.978 } 00:19:24.978 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:24.978 fio-3.35 00:19:24.978 Starting 1 thread 00:19:31.561 00:19:31.561 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71205: Mon Dec 9 10:11:01 2024 00:19:31.561 read: IOPS=26.6k, BW=104MiB/s (109MB/s)(519MiB/5001msec) 00:19:31.561 slat (usec): min=5, max=786, avg=33.45, stdev=28.73 00:19:31.561 clat (usec): min=89, max=5620, avg=1323.47, stdev=718.98 00:19:31.561 lat (usec): min=164, max=5663, avg=1356.91, stdev=721.77 00:19:31.561 clat percentiles (usec): 00:19:31.561 | 1.00th=[ 239], 5.00th=[ 347], 10.00th=[ 453], 20.00th=[ 652], 00:19:31.561 | 30.00th=[ 848], 40.00th=[ 1045], 50.00th=[ 1237], 60.00th=[ 1434], 00:19:31.561 | 70.00th=[ 1663], 80.00th=[ 1926], 90.00th=[ 2311], 95.00th=[ 2606], 00:19:31.561 | 99.00th=[ 3326], 99.50th=[ 3752], 99.90th=[ 4490], 99.95th=[ 4686], 00:19:31.561 | 99.99th=[ 5014] 00:19:31.561 bw ( KiB/s): min=92335, max=131152, per=100.00%, avg=107604.33, 
stdev=11137.05, samples=9 00:19:31.561 iops : min=23083, max=32788, avg=26901.00, stdev=2784.39, samples=9 00:19:31.561 lat (usec) : 100=0.01%, 250=1.26%, 500=11.08%, 750=12.60%, 1000=12.91% 00:19:31.561 lat (msec) : 2=44.48%, 4=17.37%, 10=0.31% 00:19:31.561 cpu : usr=25.28%, sys=54.08%, ctx=197, majf=0, minf=636 00:19:31.561 IO depths : 1=0.1%, 2=1.4%, 4=5.0%, 8=12.2%, 16=26.1%, 32=53.4%, >=64=1.7% 00:19:31.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.561 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:19:31.561 issued rwts: total=132964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.561 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:31.561 00:19:31.561 Run status group 0 (all jobs): 00:19:31.561 READ: bw=104MiB/s (109MB/s), 104MiB/s-104MiB/s (109MB/s-109MB/s), io=519MiB (545MB), run=5001-5001msec 00:19:32.497 ----------------------------------------------------- 00:19:32.497 Suppressions used: 00:19:32.497 count bytes template 00:19:32.497 1 11 /usr/src/fio/parse.c 00:19:32.497 1 8 libtcmalloc_minimal.so 00:19:32.497 1 904 libcrypto.so 00:19:32.497 ----------------------------------------------------- 00:19:32.497 00:19:32.497 10:11:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:32.497 10:11:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:32.497 10:11:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:32.497 10:11:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:32.497 10:11:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:32.497 10:11:03 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:32.497 10:11:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:32.497 10:11:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:32.497 10:11:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:32.497 10:11:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:32.497 10:11:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:32.498 10:11:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:32.498 10:11:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:32.498 10:11:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:32.498 10:11:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:32.498 10:11:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:32.498 10:11:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:32.498 10:11:03 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:32.498 10:11:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:32.498 10:11:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:32.498 10:11:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:32.498 { 00:19:32.498 "subsystems": [ 00:19:32.498 { 00:19:32.498 "subsystem": "bdev", 00:19:32.498 "config": [ 00:19:32.498 { 00:19:32.498 "params": { 00:19:32.498 "io_mechanism": "libaio", 00:19:32.498 "conserve_cpu": false, 00:19:32.498 "filename": "/dev/nvme0n1", 00:19:32.498 "name": "xnvme_bdev" 00:19:32.498 }, 00:19:32.498 "method": "bdev_xnvme_create" 00:19:32.498 }, 00:19:32.498 { 00:19:32.498 "method": "bdev_wait_for_examine" 00:19:32.498 } 00:19:32.498 ] 00:19:32.498 } 00:19:32.498 ] 00:19:32.498 } 00:19:32.757 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:32.757 fio-3.35 00:19:32.757 Starting 1 thread 00:19:39.318 00:19:39.318 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71309: Mon Dec 9 10:11:09 2024 00:19:39.318 write: IOPS=25.8k, BW=101MiB/s (106MB/s)(503MiB/5001msec); 0 zone resets 00:19:39.318 slat (usec): min=5, max=746, avg=34.49, stdev=27.60 00:19:39.318 clat (usec): min=115, max=6082, avg=1361.38, stdev=726.56 00:19:39.318 lat (usec): min=166, max=6214, avg=1395.87, stdev=728.88 00:19:39.318 clat percentiles (usec): 00:19:39.318 | 1.00th=[ 243], 5.00th=[ 351], 10.00th=[ 465], 20.00th=[ 676], 00:19:39.318 | 30.00th=[ 881], 40.00th=[ 1074], 50.00th=[ 1287], 60.00th=[ 1500], 00:19:39.318 | 70.00th=[ 1745], 80.00th=[ 2008], 90.00th=[ 2343], 95.00th=[ 2606], 00:19:39.318 | 99.00th=[ 3261], 99.50th=[ 3720], 99.90th=[ 4424], 99.95th=[ 4621], 00:19:39.318 | 99.99th=[ 5276] 00:19:39.318 bw ( KiB/s): min=91496, max=119457, per=99.61%, avg=102650.00, stdev=8071.15, samples=9 00:19:39.318 iops : min=22874, max=29864, avg=25662.44, stdev=2017.75, samples=9 00:19:39.318 lat (usec) : 250=1.19%, 500=10.48%, 750=12.08%, 1000=12.55% 00:19:39.318 lat (msec) : 2=43.42%, 4=19.98%, 10=0.30% 00:19:39.318 cpu : usr=25.00%, sys=54.52%, ctx=70, majf=0, minf=633 00:19:39.318 IO depths : 1=0.1%, 2=1.5%, 4=5.3%, 8=12.3%, 16=26.0%, 32=53.2%, >=64=1.7% 00:19:39.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.318 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:19:39.318 issued rwts: total=0,128840,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:39.318 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:39.318 00:19:39.318 Run status group 0 (all jobs): 00:19:39.318 WRITE: bw=101MiB/s (106MB/s), 101MiB/s-101MiB/s (106MB/s-106MB/s), io=503MiB (528MB), run=5001-5001msec 00:19:40.276 ----------------------------------------------------- 00:19:40.276 Suppressions used: 00:19:40.276 count bytes template 00:19:40.276 1 11 /usr/src/fio/parse.c 00:19:40.276 1 8 libtcmalloc_minimal.so 00:19:40.276 1 904 libcrypto.so 00:19:40.276 ----------------------------------------------------- 00:19:40.276 00:19:40.276 00:19:40.276 real 0m15.451s 00:19:40.276 user 0m6.700s 00:19:40.276 sys 0m6.344s 00:19:40.276 10:11:10 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:40.276 ************************************ 00:19:40.276 END TEST xnvme_fio_plugin 00:19:40.276 ************************************ 00:19:40.276 10:11:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:40.276 10:11:10 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:19:40.276 10:11:10 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:19:40.276 10:11:10 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:19:40.276 10:11:10 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:19:40.276 10:11:10 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:40.276 10:11:10 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:40.276 10:11:10 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:40.276 ************************************ 00:19:40.276 START TEST xnvme_rpc 00:19:40.276 ************************************ 00:19:40.277 10:11:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:19:40.277 10:11:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:19:40.277 10:11:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:19:40.277 10:11:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:19:40.277 10:11:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:19:40.277 10:11:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71395 00:19:40.277 10:11:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71395 00:19:40.277 10:11:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:40.277 10:11:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71395 ']' 00:19:40.277 10:11:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.277 10:11:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:40.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.277 10:11:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.277 10:11:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:40.277 10:11:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:40.535 [2024-12-09 10:11:11.081558] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
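The xnvme_rpc pass starting here exercises the target purely over JSON-RPC: create an xnvme bdev on /dev/nvme0n1 with io_mechanism libaio and conserve_cpu enabled (-c), read the config back, then delete the bdev and kill the target. A minimal manual reproduction, assuming scripts/rpc.py talks to the default /var/tmp/spdk.sock (rpc_cmd in the trace is a test helper that forwards commands to it):

    ./build/bin/spdk_tgt &
    ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c
    ./scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expect: true
    ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev
    kill %1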
00:19:40.535 [2024-12-09 10:11:11.081778] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71395 ] 00:19:40.535 [2024-12-09 10:11:11.262936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.794 [2024-12-09 10:11:11.418446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.729 10:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:41.729 10:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:19:41.729 10:11:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:19:41.729 10:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.729 10:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:41.729 xnvme_bdev 00:19:41.729 10:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.729 10:11:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:19:41.729 10:11:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:41.729 10:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.729 10:11:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:19:41.729 10:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:41.729 10:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.729 10:11:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:19:41.729 10:11:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:19:41.729 10:11:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:41.729 10:11:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:19:41.729 10:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.729 10:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:41.729 10:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.729 10:11:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:19:41.729 10:11:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:19:41.729 10:11:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:41.729 10:11:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:19:41.729 10:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.729 10:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:41.988 10:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.988 10:11:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:19:41.988 10:11:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:19:41.988 10:11:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:41.988 10:11:12 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.988 10:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:41.988 10:11:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:19:41.988 10:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.988 10:11:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:19:41.988 10:11:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:19:41.988 10:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.988 10:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:41.988 10:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.988 10:11:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71395 00:19:41.988 10:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71395 ']' 00:19:41.988 10:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71395 00:19:41.988 10:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:19:41.988 10:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:41.988 10:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71395 00:19:41.988 10:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:41.988 10:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:41.988 killing process with pid 71395 00:19:41.988 10:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71395' 00:19:41.988 10:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71395 00:19:41.988 10:11:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71395 00:19:44.568 00:19:44.568 real 0m4.311s 00:19:44.568 user 0m4.397s 00:19:44.568 sys 0m0.685s 00:19:44.568 10:11:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:44.568 10:11:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:44.568 ************************************ 00:19:44.568 END TEST xnvme_rpc 00:19:44.568 ************************************ 00:19:44.568 10:11:15 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:19:44.568 10:11:15 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:44.568 10:11:15 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:44.568 10:11:15 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:44.568 ************************************ 00:19:44.568 START TEST xnvme_bdevperf 00:19:44.568 ************************************ 00:19:44.568 10:11:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:19:44.568 10:11:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:19:44.568 10:11:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:19:44.568 10:11:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:44.568 10:11:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:19:44.568 10:11:15 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:19:44.568 10:11:15 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:44.568 10:11:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:44.828 { 00:19:44.828 "subsystems": [ 00:19:44.828 { 00:19:44.828 "subsystem": "bdev", 00:19:44.828 "config": [ 00:19:44.828 { 00:19:44.828 "params": { 00:19:44.828 "io_mechanism": "libaio", 00:19:44.828 "conserve_cpu": true, 00:19:44.828 "filename": "/dev/nvme0n1", 00:19:44.828 "name": "xnvme_bdev" 00:19:44.828 }, 00:19:44.828 "method": "bdev_xnvme_create" 00:19:44.828 }, 00:19:44.828 { 00:19:44.828 "method": "bdev_wait_for_examine" 00:19:44.828 } 00:19:44.828 ] 00:19:44.828 } 00:19:44.828 ] 00:19:44.828 } 00:19:44.828 [2024-12-09 10:11:15.462075] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:19:44.828 [2024-12-09 10:11:15.462327] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71482 ] 00:19:45.086 [2024-12-09 10:11:15.663468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.086 [2024-12-09 10:11:15.836074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:45.653 Running I/O for 5 seconds... 00:19:47.577 28124.00 IOPS, 109.86 MiB/s [2024-12-09T10:11:19.309Z] 28977.00 IOPS, 113.19 MiB/s [2024-12-09T10:11:20.683Z] 28421.67 IOPS, 111.02 MiB/s [2024-12-09T10:11:21.621Z] 27641.00 IOPS, 107.97 MiB/s 00:19:50.824 Latency(us) 00:19:50.824 [2024-12-09T10:11:21.621Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.824 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:19:50.824 xnvme_bdev : 5.00 27575.13 107.72 0.00 0.00 2315.25 301.61 18588.39 00:19:50.824 [2024-12-09T10:11:21.621Z] =================================================================================================================== 00:19:50.824 [2024-12-09T10:11:21.621Z] Total : 27575.13 107.72 0.00 0.00 2315.25 301.61 18588.39 00:19:52.201 10:11:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:52.201 10:11:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:19:52.201 10:11:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:19:52.201 10:11:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:52.201 10:11:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:52.201 { 00:19:52.201 "subsystems": [ 00:19:52.201 { 00:19:52.201 "subsystem": "bdev", 00:19:52.201 "config": [ 00:19:52.201 { 00:19:52.201 "params": { 00:19:52.201 "io_mechanism": "libaio", 00:19:52.201 "conserve_cpu": true, 00:19:52.201 "filename": "/dev/nvme0n1", 00:19:52.201 "name": "xnvme_bdev" 00:19:52.201 }, 00:19:52.201 "method": "bdev_xnvme_create" 00:19:52.201 }, 00:19:52.201 { 00:19:52.201 "method": "bdev_wait_for_examine" 00:19:52.201 } 00:19:52.201 ] 00:19:52.201 } 00:19:52.201 ] 00:19:52.201 } 00:19:52.201 [2024-12-09 10:11:22.685862] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
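The randwrite bdevperf pass starting here uses the same invocation shape as the randread pass above, with only -w changed; the generated JSON bdev config is fed through /dev/fd/62. A standalone equivalent, assuming the config printed above is saved to xnvme.json:

    # queue depth 64, 4 KiB IOs, 5 s runtime, targeting only the xnvme_bdev bdev
    ./build/examples/bdevperf --json xnvme.json \
        -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096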
00:19:52.201 [2024-12-09 10:11:22.686331] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71568 ] 00:19:52.201 [2024-12-09 10:11:22.877494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.460 [2024-12-09 10:11:23.028553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.719 Running I/O for 5 seconds... 00:19:55.044 27235.00 IOPS, 106.39 MiB/s [2024-12-09T10:11:26.779Z] 25927.50 IOPS, 101.28 MiB/s [2024-12-09T10:11:27.723Z] 25766.33 IOPS, 100.65 MiB/s [2024-12-09T10:11:28.672Z] 25323.00 IOPS, 98.92 MiB/s [2024-12-09T10:11:28.672Z] 24824.60 IOPS, 96.97 MiB/s 00:19:57.875 Latency(us) 00:19:57.875 [2024-12-09T10:11:28.672Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:57.875 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:19:57.875 xnvme_bdev : 5.01 24796.00 96.86 0.00 0.00 2573.41 314.65 7328.12 00:19:57.875 [2024-12-09T10:11:28.672Z] =================================================================================================================== 00:19:57.875 [2024-12-09T10:11:28.672Z] Total : 24796.00 96.86 0.00 0.00 2573.41 314.65 7328.12 00:19:59.253 00:19:59.253 real 0m14.386s 00:19:59.253 user 0m5.829s 00:19:59.253 sys 0m6.108s 00:19:59.253 10:11:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:59.253 10:11:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:59.253 ************************************ 00:19:59.253 END TEST xnvme_bdevperf 00:19:59.253 ************************************ 00:19:59.253 10:11:29 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:19:59.253 10:11:29 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:59.253 10:11:29 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:59.253 10:11:29 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:59.253 ************************************ 00:19:59.253 START TEST xnvme_fio_plugin 00:19:59.253 ************************************ 00:19:59.253 10:11:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:19:59.253 10:11:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:19:59.253 10:11:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:19:59.253 10:11:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:59.253 10:11:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:59.253 10:11:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:59.253 10:11:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:59.253 10:11:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:59.253 10:11:29 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:59.253 10:11:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:59.253 10:11:29 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:59.253 10:11:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:59.253 10:11:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:59.253 10:11:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:59.253 10:11:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:59.253 10:11:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:59.253 10:11:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:59.253 10:11:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:59.253 10:11:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:59.253 10:11:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:59.253 10:11:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:59.253 10:11:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:59.253 10:11:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:59.254 10:11:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:59.254 { 00:19:59.254 "subsystems": [ 00:19:59.254 { 00:19:59.254 "subsystem": "bdev", 00:19:59.254 "config": [ 00:19:59.254 { 00:19:59.254 "params": { 00:19:59.254 "io_mechanism": "libaio", 00:19:59.254 "conserve_cpu": true, 00:19:59.254 "filename": "/dev/nvme0n1", 00:19:59.254 "name": "xnvme_bdev" 00:19:59.254 }, 00:19:59.254 "method": "bdev_xnvme_create" 00:19:59.254 }, 00:19:59.254 { 00:19:59.254 "method": "bdev_wait_for_examine" 00:19:59.254 } 00:19:59.254 ] 00:19:59.254 } 00:19:59.254 ] 00:19:59.254 } 00:19:59.254 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:59.254 fio-3.35 00:19:59.254 Starting 1 thread 00:20:05.820 00:20:05.820 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71696: Mon Dec 9 10:11:35 2024 00:20:05.820 read: IOPS=21.6k, BW=84.5MiB/s (88.6MB/s)(423MiB/5001msec) 00:20:05.820 slat (usec): min=5, max=740, avg=41.74, stdev=27.31 00:20:05.820 clat (usec): min=56, max=7117, avg=1590.71, stdev=871.65 00:20:05.820 lat (usec): min=78, max=7264, avg=1632.45, stdev=874.36 00:20:05.820 clat percentiles (usec): 00:20:05.820 | 1.00th=[ 241], 5.00th=[ 363], 10.00th=[ 494], 20.00th=[ 758], 00:20:05.820 | 30.00th=[ 1004], 40.00th=[ 1254], 50.00th=[ 1500], 60.00th=[ 1762], 00:20:05.820 | 70.00th=[ 2057], 80.00th=[ 2376], 90.00th=[ 2769], 95.00th=[ 3064], 00:20:05.820 | 99.00th=[ 3818], 99.50th=[ 4293], 99.90th=[ 5080], 99.95th=[ 5342], 00:20:05.820 | 99.99th=[ 6063] 00:20:05.820 bw ( KiB/s): min=80528, max=92144, 
per=99.98%, avg=86533.67, stdev=4168.46, samples=9 00:20:05.820 iops : min=20132, max=23036, avg=21633.33, stdev=1042.18, samples=9 00:20:05.820 lat (usec) : 100=0.01%, 250=1.21%, 500=9.01%, 750=9.63%, 1000=9.87% 00:20:05.820 lat (msec) : 2=38.28%, 4=31.24%, 10=0.76% 00:20:05.820 cpu : usr=22.26%, sys=54.76%, ctx=86, majf=0, minf=634 00:20:05.820 IO depths : 1=0.1%, 2=1.8%, 4=5.9%, 8=12.6%, 16=25.7%, 32=52.3%, >=64=1.6% 00:20:05.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:05.820 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:20:05.820 issued rwts: total=108214,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:05.820 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:05.820 00:20:05.820 Run status group 0 (all jobs): 00:20:05.820 READ: bw=84.5MiB/s (88.6MB/s), 84.5MiB/s-84.5MiB/s (88.6MB/s-88.6MB/s), io=423MiB (443MB), run=5001-5001msec 00:20:06.757 ----------------------------------------------------- 00:20:06.757 Suppressions used: 00:20:06.757 count bytes template 00:20:06.757 1 11 /usr/src/fio/parse.c 00:20:06.757 1 8 libtcmalloc_minimal.so 00:20:06.757 1 904 libcrypto.so 00:20:06.757 ----------------------------------------------------- 00:20:06.757 00:20:06.757 10:11:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:06.757 10:11:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:06.757 10:11:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:20:06.757 10:11:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:06.757 10:11:37 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:20:06.757 10:11:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:06.757 10:11:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:06.757 10:11:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:06.757 10:11:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:06.757 10:11:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:06.757 10:11:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:20:06.757 10:11:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:06.757 10:11:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:06.757 10:11:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:06.757 10:11:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:20:06.757 10:11:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:06.757 10:11:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:06.757 10:11:37 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:06.757 10:11:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:20:06.757 10:11:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:06.757 10:11:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:06.757 { 00:20:06.757 "subsystems": [ 00:20:06.757 { 00:20:06.757 "subsystem": "bdev", 00:20:06.757 "config": [ 00:20:06.757 { 00:20:06.757 "params": { 00:20:06.757 "io_mechanism": "libaio", 00:20:06.757 "conserve_cpu": true, 00:20:06.758 "filename": "/dev/nvme0n1", 00:20:06.758 "name": "xnvme_bdev" 00:20:06.758 }, 00:20:06.758 "method": "bdev_xnvme_create" 00:20:06.758 }, 00:20:06.758 { 00:20:06.758 "method": "bdev_wait_for_examine" 00:20:06.758 } 00:20:06.758 ] 00:20:06.758 } 00:20:06.758 ] 00:20:06.758 } 00:20:07.017 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:20:07.017 fio-3.35 00:20:07.017 Starting 1 thread 00:20:13.582 00:20:13.582 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71788: Mon Dec 9 10:11:43 2024 00:20:13.582 write: IOPS=24.4k, BW=95.3MiB/s (99.9MB/s)(476MiB/5001msec); 0 zone resets 00:20:13.582 slat (usec): min=4, max=656, avg=36.53, stdev=29.12 00:20:13.582 clat (usec): min=86, max=6526, avg=1443.18, stdev=794.79 00:20:13.582 lat (usec): min=142, max=6594, avg=1479.71, stdev=798.11 00:20:13.582 clat percentiles (usec): 00:20:13.582 | 1.00th=[ 251], 5.00th=[ 367], 10.00th=[ 486], 20.00th=[ 709], 00:20:13.582 | 30.00th=[ 914], 40.00th=[ 1106], 50.00th=[ 1319], 60.00th=[ 1549], 00:20:13.582 | 70.00th=[ 1844], 80.00th=[ 2147], 90.00th=[ 2540], 95.00th=[ 2835], 00:20:13.582 | 99.00th=[ 3556], 99.50th=[ 3916], 99.90th=[ 4621], 99.95th=[ 4883], 00:20:13.582 | 99.99th=[ 5538] 00:20:13.582 bw ( KiB/s): min=84031, max=120584, per=100.00%, avg=98723.44, stdev=11344.12, samples=9 00:20:13.582 iops : min=21007, max=30146, avg=24680.78, stdev=2836.15, samples=9 00:20:13.582 lat (usec) : 100=0.01%, 250=0.99%, 500=9.75%, 750=11.37%, 1000=12.28% 00:20:13.582 lat (msec) : 2=40.82%, 4=24.37%, 10=0.42% 00:20:13.582 cpu : usr=25.04%, sys=52.94%, ctx=79, majf=0, minf=765 00:20:13.582 IO depths : 1=0.1%, 2=1.5%, 4=5.3%, 8=12.2%, 16=25.6%, 32=53.6%, >=64=1.7% 00:20:13.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.582 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:20:13.582 issued rwts: total=0,121980,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:13.582 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:13.582 00:20:13.582 Run status group 0 (all jobs): 00:20:13.582 WRITE: bw=95.3MiB/s (99.9MB/s), 95.3MiB/s-95.3MiB/s (99.9MB/s-99.9MB/s), io=476MiB (500MB), run=5001-5001msec 00:20:14.521 ----------------------------------------------------- 00:20:14.521 Suppressions used: 00:20:14.521 count bytes template 00:20:14.521 1 11 /usr/src/fio/parse.c 00:20:14.521 1 8 libtcmalloc_minimal.so 00:20:14.521 1 904 libcrypto.so 00:20:14.521 ----------------------------------------------------- 00:20:14.521 00:20:14.521 ************************************ 00:20:14.521 END TEST xnvme_fio_plugin 00:20:14.521 
************************************ 00:20:14.521 00:20:14.521 real 0m15.305s 00:20:14.521 user 0m6.463s 00:20:14.521 sys 0m6.254s 00:20:14.521 10:11:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:14.521 10:11:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:14.521 10:11:45 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:20:14.521 10:11:45 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:20:14.521 10:11:45 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:20:14.521 10:11:45 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:20:14.521 10:11:45 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:20:14.521 10:11:45 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:20:14.521 10:11:45 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:20:14.521 10:11:45 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:20:14.521 10:11:45 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:20:14.521 10:11:45 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:14.521 10:11:45 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:14.521 10:11:45 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:14.521 ************************************ 00:20:14.521 START TEST xnvme_rpc 00:20:14.521 ************************************ 00:20:14.521 10:11:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:20:14.521 10:11:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:20:14.521 10:11:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:20:14.521 10:11:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:20:14.521 10:11:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:20:14.521 10:11:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71880 00:20:14.521 10:11:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71880 00:20:14.521 10:11:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71880 ']' 00:20:14.521 10:11:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:14.521 10:11:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.521 10:11:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:14.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.521 10:11:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:14.521 10:11:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:14.521 10:11:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:14.521 [2024-12-09 10:11:45.259599] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
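The xtrace above (xnvme.sh@75 through @88) shows the driver advancing to the io_uring leg of its test matrix: a nested loop over io mechanisms and the conserve_cpu toggle, running the same three tests at each point. A sketch of that loop follows; the array contents are assumptions, since this log only shows libaio and io_uring being visited, and run_test comes from autotest_common.sh:

    declare -A method_bdev_xnvme_create_0
    xnvme_io=(libaio io_uring)          # assumed; inferred from the mechanisms seen in this log
    xnvme_conserve_cpu=(false true)     # assumed; matches the order observed above
    for io in "${xnvme_io[@]}"; do
        method_bdev_xnvme_create_0["io_mechanism"]=$io
        for cc in "${xnvme_conserve_cpu[@]}"; do
            method_bdev_xnvme_create_0["conserve_cpu"]=$cc
            run_test xnvme_rpc xnvme_rpc
            run_test xnvme_bdevperf xnvme_bdevperf
            run_test xnvme_fio_plugin xnvme_fio_plugin
        done
    done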
00:20:14.521 [2024-12-09 10:11:45.260044] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71880 ] 00:20:14.781 [2024-12-09 10:11:45.439948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.781 [2024-12-09 10:11:45.576683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.718 10:11:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:15.718 10:11:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:20:15.718 10:11:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:20:15.718 10:11:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.718 10:11:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:15.718 xnvme_bdev 00:20:15.718 10:11:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.718 10:11:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:20:15.718 10:11:46 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:15.718 10:11:46 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:20:15.718 10:11:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.718 10:11:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:15.718 10:11:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.977 10:11:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:20:15.977 10:11:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:20:15.977 10:11:46 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:15.977 10:11:46 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:20:15.977 10:11:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.977 10:11:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:15.977 10:11:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.977 10:11:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:20:15.977 10:11:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:20:15.977 10:11:46 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:20:15.977 10:11:46 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:15.977 10:11:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.977 10:11:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:15.977 10:11:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.977 10:11:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:20:15.977 10:11:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:20:15.977 10:11:46 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:15.977 10:11:46 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.977 10:11:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:15.977 10:11:46 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:20:15.977 10:11:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.977 10:11:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:20:15.977 10:11:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:20:15.977 10:11:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.977 10:11:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:15.977 10:11:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.977 10:11:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71880 00:20:15.977 10:11:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71880 ']' 00:20:15.977 10:11:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71880 00:20:15.977 10:11:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:20:15.977 10:11:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:15.977 10:11:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71880 00:20:15.977 killing process with pid 71880 00:20:15.977 10:11:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:15.977 10:11:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:15.977 10:11:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71880' 00:20:15.977 10:11:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71880 00:20:15.977 10:11:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71880 00:20:18.512 00:20:18.512 real 0m3.892s 00:20:18.512 user 0m3.924s 00:20:18.512 sys 0m0.691s 00:20:18.512 10:11:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:18.512 ************************************ 00:20:18.512 END TEST xnvme_rpc 00:20:18.512 ************************************ 00:20:18.512 10:11:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:18.512 10:11:49 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:20:18.512 10:11:49 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:18.512 10:11:49 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:18.512 10:11:49 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:18.512 ************************************ 00:20:18.512 START TEST xnvme_bdevperf 00:20:18.512 ************************************ 00:20:18.512 10:11:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:20:18.512 10:11:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:20:18.512 10:11:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:20:18.512 10:11:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:18.512 10:11:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:20:18.512 10:11:49 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:20:18.512 10:11:49 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:18.512 10:11:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:18.512 { 00:20:18.512 "subsystems": [ 00:20:18.512 { 00:20:18.512 "subsystem": "bdev", 00:20:18.512 "config": [ 00:20:18.512 { 00:20:18.512 "params": { 00:20:18.512 "io_mechanism": "io_uring", 00:20:18.512 "conserve_cpu": false, 00:20:18.512 "filename": "/dev/nvme0n1", 00:20:18.512 "name": "xnvme_bdev" 00:20:18.512 }, 00:20:18.512 "method": "bdev_xnvme_create" 00:20:18.512 }, 00:20:18.512 { 00:20:18.512 "method": "bdev_wait_for_examine" 00:20:18.512 } 00:20:18.512 ] 00:20:18.512 } 00:20:18.512 ] 00:20:18.512 } 00:20:18.512 [2024-12-09 10:11:49.209547] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:20:18.512 [2024-12-09 10:11:49.209746] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71965 ] 00:20:18.772 [2024-12-09 10:11:49.400310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.772 [2024-12-09 10:11:49.547030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.347 Running I/O for 5 seconds... 00:20:21.229 47431.00 IOPS, 185.28 MiB/s [2024-12-09T10:11:52.967Z] 47672.00 IOPS, 186.22 MiB/s [2024-12-09T10:11:54.341Z] 47780.00 IOPS, 186.64 MiB/s [2024-12-09T10:11:55.277Z] 47649.75 IOPS, 186.13 MiB/s [2024-12-09T10:11:55.277Z] 47243.20 IOPS, 184.54 MiB/s 00:20:24.480 Latency(us) 00:20:24.480 [2024-12-09T10:11:55.277Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:24.480 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:20:24.480 xnvme_bdev : 5.00 47209.52 184.41 0.00 0.00 1350.62 428.22 9234.62 00:20:24.480 [2024-12-09T10:11:55.277Z] =================================================================================================================== 00:20:24.480 [2024-12-09T10:11:55.277Z] Total : 47209.52 184.41 0.00 0.00 1350.62 428.22 9234.62 00:20:25.415 10:11:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:25.415 10:11:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:20:25.415 10:11:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:20:25.415 10:11:56 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:25.415 10:11:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:25.415 { 00:20:25.415 "subsystems": [ 00:20:25.415 { 00:20:25.415 "subsystem": "bdev", 00:20:25.415 "config": [ 00:20:25.415 { 00:20:25.415 "params": { 00:20:25.415 "io_mechanism": "io_uring", 00:20:25.415 "conserve_cpu": false, 00:20:25.415 "filename": "/dev/nvme0n1", 00:20:25.415 "name": "xnvme_bdev" 00:20:25.415 }, 00:20:25.415 "method": "bdev_xnvme_create" 00:20:25.415 }, 00:20:25.415 { 00:20:25.415 "method": "bdev_wait_for_examine" 00:20:25.415 } 00:20:25.415 ] 00:20:25.415 } 00:20:25.415 ] 00:20:25.415 } 00:20:25.415 [2024-12-09 10:11:56.150543] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
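The randread pass above is internally consistent: at the 4 KiB IO size, the reported IOPS imply the reported bandwidth. A quick check:

    # 47209.52 IOPS * 4096 B per IO, expressed in MiB/s
    awk 'BEGIN { print 47209.52 * 4096 / 1024^2 }'   # -> 184.412, matching the 184.41 MiB/s above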
00:20:25.415 [2024-12-09 10:11:56.150723] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72040 ] 00:20:25.674 [2024-12-09 10:11:56.337218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.674 [2024-12-09 10:11:56.453405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:26.240 Running I/O for 5 seconds... 00:20:28.109 41409.00 IOPS, 161.75 MiB/s [2024-12-09T10:11:59.843Z] 40832.50 IOPS, 159.50 MiB/s [2024-12-09T10:12:00.805Z] 40747.00 IOPS, 159.17 MiB/s [2024-12-09T10:12:02.181Z] 41199.75 IOPS, 160.94 MiB/s [2024-12-09T10:12:02.181Z] 41407.80 IOPS, 161.75 MiB/s 00:20:31.384 Latency(us) 00:20:31.384 [2024-12-09T10:12:02.181Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:31.384 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:20:31.384 xnvme_bdev : 5.01 41373.48 161.62 0.00 0.00 1541.28 614.40 5659.93 00:20:31.384 [2024-12-09T10:12:02.181Z] =================================================================================================================== 00:20:31.384 [2024-12-09T10:12:02.181Z] Total : 41373.48 161.62 0.00 0.00 1541.28 614.40 5659.93 00:20:32.319 ************************************ 00:20:32.319 END TEST xnvme_bdevperf 00:20:32.319 ************************************ 00:20:32.319 00:20:32.319 real 0m13.891s 00:20:32.319 user 0m7.276s 00:20:32.319 sys 0m6.412s 00:20:32.319 10:12:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:32.319 10:12:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:32.319 10:12:03 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:20:32.319 10:12:03 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:32.319 10:12:03 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:32.319 10:12:03 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:32.319 ************************************ 00:20:32.319 START TEST xnvme_fio_plugin 00:20:32.319 ************************************ 00:20:32.319 10:12:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:20:32.319 10:12:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:20:32.319 10:12:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:20:32.319 10:12:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:32.319 10:12:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:32.319 10:12:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:32.319 10:12:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:20:32.319 10:12:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:32.319 
10:12:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:32.319 10:12:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:32.319 10:12:03 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:20:32.319 10:12:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:32.319 10:12:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:20:32.319 10:12:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:32.319 10:12:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:32.319 10:12:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:32.319 10:12:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:32.319 10:12:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:32.319 10:12:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:20:32.319 10:12:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:32.319 10:12:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:32.319 10:12:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:20:32.319 10:12:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:32.319 10:12:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:32.319 { 00:20:32.319 "subsystems": [ 00:20:32.319 { 00:20:32.319 "subsystem": "bdev", 00:20:32.319 "config": [ 00:20:32.319 { 00:20:32.319 "params": { 00:20:32.319 "io_mechanism": "io_uring", 00:20:32.319 "conserve_cpu": false, 00:20:32.319 "filename": "/dev/nvme0n1", 00:20:32.319 "name": "xnvme_bdev" 00:20:32.319 }, 00:20:32.319 "method": "bdev_xnvme_create" 00:20:32.319 }, 00:20:32.319 { 00:20:32.319 "method": "bdev_wait_for_examine" 00:20:32.319 } 00:20:32.319 ] 00:20:32.319 } 00:20:32.319 ] 00:20:32.319 } 00:20:32.578 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:20:32.578 fio-3.35 00:20:32.578 Starting 1 thread 00:20:39.140 00:20:39.140 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72161: Mon Dec 9 10:12:09 2024 00:20:39.140 read: IOPS=46.6k, BW=182MiB/s (191MB/s)(910MiB/5001msec) 00:20:39.140 slat (usec): min=2, max=109, avg= 4.25, stdev= 2.12 00:20:39.140 clat (usec): min=69, max=51801, avg=1210.25, stdev=882.73 00:20:39.140 lat (usec): min=74, max=51805, avg=1214.50, stdev=882.84 00:20:39.140 clat percentiles (usec): 00:20:39.140 | 1.00th=[ 297], 5.00th=[ 947], 10.00th=[ 996], 20.00th=[ 1045], 00:20:39.140 | 30.00th=[ 1090], 40.00th=[ 1123], 50.00th=[ 1156], 60.00th=[ 1188], 00:20:39.140 | 70.00th=[ 1237], 80.00th=[ 1287], 90.00th=[ 1369], 95.00th=[ 1500], 00:20:39.140 | 99.00th=[ 2245], 99.50th=[ 3392], 99.90th=[14484], 99.95th=[23987], 00:20:39.140 | 99.99th=[33817] 00:20:39.140 bw ( KiB/s): min=176128, 
max=203776, per=100.00%, avg=187940.44, stdev=9944.25, samples=9 00:20:39.140 iops : min=44032, max=50944, avg=46985.11, stdev=2486.06, samples=9 00:20:39.140 lat (usec) : 100=0.08%, 250=0.71%, 500=0.84%, 750=0.41%, 1000=8.91% 00:20:39.140 lat (msec) : 2=87.78%, 4=0.93%, 10=0.19%, 20=0.08%, 50=0.07% 00:20:39.140 lat (msec) : 100=0.01% 00:20:39.140 cpu : usr=36.64%, sys=62.20%, ctx=9, majf=0, minf=762 00:20:39.140 IO depths : 1=1.3%, 2=2.7%, 4=5.7%, 8=11.9%, 16=24.6%, 32=51.8%, >=64=1.9% 00:20:39.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:39.140 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:20:39.140 issued rwts: total=232835,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:39.140 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:39.140 00:20:39.140 Run status group 0 (all jobs): 00:20:39.140 READ: bw=182MiB/s (191MB/s), 182MiB/s-182MiB/s (191MB/s-191MB/s), io=910MiB (954MB), run=5001-5001msec 00:20:40.077 ----------------------------------------------------- 00:20:40.077 Suppressions used: 00:20:40.077 count bytes template 00:20:40.077 1 11 /usr/src/fio/parse.c 00:20:40.077 1 8 libtcmalloc_minimal.so 00:20:40.077 1 904 libcrypto.so 00:20:40.077 ----------------------------------------------------- 00:20:40.077 00:20:40.077 10:12:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:40.077 10:12:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:40.077 10:12:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:20:40.077 10:12:10 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:20:40.077 10:12:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:40.077 10:12:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:40.077 10:12:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:40.077 10:12:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:40.077 10:12:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:40.077 10:12:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:40.077 10:12:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:20:40.077 10:12:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:40.077 10:12:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:40.077 10:12:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:40.077 10:12:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:20:40.077 10:12:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:40.077 10:12:10 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:40.077 10:12:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:40.077 10:12:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:20:40.077 10:12:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:40.077 10:12:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:40.077 { 00:20:40.077 "subsystems": [ 00:20:40.077 { 00:20:40.077 "subsystem": "bdev", 00:20:40.077 "config": [ 00:20:40.077 { 00:20:40.077 "params": { 00:20:40.077 "io_mechanism": "io_uring", 00:20:40.077 "conserve_cpu": false, 00:20:40.077 "filename": "/dev/nvme0n1", 00:20:40.077 "name": "xnvme_bdev" 00:20:40.077 }, 00:20:40.077 "method": "bdev_xnvme_create" 00:20:40.077 }, 00:20:40.077 { 00:20:40.077 "method": "bdev_wait_for_examine" 00:20:40.077 } 00:20:40.077 ] 00:20:40.077 } 00:20:40.077 ] 00:20:40.077 } 00:20:40.077 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:20:40.077 fio-3.35 00:20:40.077 Starting 1 thread 00:20:46.643 00:20:46.643 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72258: Mon Dec 9 10:12:16 2024 00:20:46.643 write: IOPS=45.1k, BW=176MiB/s (185MB/s)(881MiB/5001msec); 0 zone resets 00:20:46.643 slat (usec): min=2, max=103, avg= 4.73, stdev= 2.33 00:20:46.643 clat (usec): min=386, max=3380, avg=1231.67, stdev=164.81 00:20:46.643 lat (usec): min=390, max=3388, avg=1236.40, stdev=165.70 00:20:46.643 clat percentiles (usec): 00:20:46.643 | 1.00th=[ 963], 5.00th=[ 1020], 10.00th=[ 1057], 20.00th=[ 1106], 00:20:46.643 | 30.00th=[ 1139], 40.00th=[ 1172], 50.00th=[ 1205], 60.00th=[ 1254], 00:20:46.643 | 70.00th=[ 1287], 80.00th=[ 1336], 90.00th=[ 1418], 95.00th=[ 1565], 00:20:46.643 | 99.00th=[ 1778], 99.50th=[ 1844], 99.90th=[ 2024], 99.95th=[ 2147], 00:20:46.643 | 99.99th=[ 2573] 00:20:46.643 bw ( KiB/s): min=171008, max=197632, per=100.00%, avg=180414.22, stdev=7442.06, samples=9 00:20:46.643 iops : min=42752, max=49408, avg=45103.56, stdev=1860.52, samples=9 00:20:46.643 lat (usec) : 500=0.01%, 750=0.01%, 1000=3.04% 00:20:46.643 lat (msec) : 2=96.84%, 4=0.12% 00:20:46.643 cpu : usr=40.04%, sys=58.84%, ctx=13, majf=0, minf=763 00:20:46.643 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:20:46.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.643 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.0%, 64=1.5%, >=64=0.0% 00:20:46.643 issued rwts: total=0,225558,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.643 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:46.643 00:20:46.643 Run status group 0 (all jobs): 00:20:46.643 WRITE: bw=176MiB/s (185MB/s), 176MiB/s-176MiB/s (185MB/s-185MB/s), io=881MiB (924MB), run=5001-5001msec 00:20:47.242 ----------------------------------------------------- 00:20:47.242 Suppressions used: 00:20:47.242 count bytes template 00:20:47.242 1 11 /usr/src/fio/parse.c 00:20:47.242 1 8 libtcmalloc_minimal.so 00:20:47.242 1 904 libcrypto.so 00:20:47.242 ----------------------------------------------------- 00:20:47.242 00:20:47.506 00:20:47.506 real 0m15.020s 
00:20:47.506 user 0m7.768s 00:20:47.506 sys 0m6.851s 00:20:47.506 ************************************ 00:20:47.506 END TEST xnvme_fio_plugin 00:20:47.506 ************************************ 00:20:47.506 10:12:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:47.506 10:12:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:47.506 10:12:18 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:20:47.506 10:12:18 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:20:47.506 10:12:18 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:20:47.506 10:12:18 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:20:47.506 10:12:18 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:47.506 10:12:18 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:47.506 10:12:18 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:47.506 ************************************ 00:20:47.506 START TEST xnvme_rpc 00:20:47.506 ************************************ 00:20:47.506 10:12:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:20:47.506 10:12:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:20:47.506 10:12:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:20:47.506 10:12:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:20:47.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:47.506 10:12:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:20:47.506 10:12:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72345 00:20:47.506 10:12:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72345 00:20:47.506 10:12:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:47.506 10:12:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72345 ']' 00:20:47.506 10:12:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.506 10:12:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:47.506 10:12:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:47.506 10:12:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:47.506 10:12:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:47.506 [2024-12-09 10:12:18.243586] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
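The final xnvme_rpc leg visible here repeats the RPC flow with io_uring and conserve_cpu enabled (the -c argument to bdev_xnvme_create). Assuming the create succeeds, the framework_get_config bdev output that the jq checks below parse should contain an entry equivalent to this sketch (field order may differ):

    {
      "params": {
        "io_mechanism": "io_uring",
        "conserve_cpu": true,
        "filename": "/dev/nvme0n1",
        "name": "xnvme_bdev"
      },
      "method": "bdev_xnvme_create"
    }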
00:20:47.506 [2024-12-09 10:12:18.244125] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72345 ] 00:20:47.766 [2024-12-09 10:12:18.432252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.025 [2024-12-09 10:12:18.573462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.962 10:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:48.962 10:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:20:48.962 10:12:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:20:48.962 10:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.962 10:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:48.962 xnvme_bdev 00:20:48.962 10:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.962 10:12:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:20:48.962 10:12:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:48.962 10:12:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:20:48.962 10:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.962 10:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:48.962 10:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.962 10:12:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:20:48.962 10:12:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:20:48.963 10:12:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:48.963 10:12:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:20:48.963 10:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.963 10:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:48.963 10:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.963 10:12:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:20:48.963 10:12:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:20:48.963 10:12:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:48.963 10:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.963 10:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:48.963 10:12:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:20:48.963 10:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.963 10:12:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:20:48.963 10:12:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:20:48.963 10:12:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:48.963 10:12:19 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.963 10:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:48.963 10:12:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:20:48.963 10:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.963 10:12:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:20:48.963 10:12:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:20:48.963 10:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.963 10:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:48.963 10:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.963 10:12:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72345 00:20:48.963 10:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72345 ']' 00:20:48.963 10:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72345 00:20:48.963 10:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:20:48.963 10:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:48.963 10:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72345 00:20:48.963 killing process with pid 72345 00:20:48.963 10:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:48.963 10:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:48.963 10:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72345' 00:20:48.963 10:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72345 00:20:48.963 10:12:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72345 00:20:51.497 00:20:51.497 real 0m3.920s 00:20:51.497 user 0m3.996s 00:20:51.497 sys 0m0.667s 00:20:51.497 10:12:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:51.497 ************************************ 00:20:51.497 END TEST xnvme_rpc 00:20:51.497 ************************************ 00:20:51.497 10:12:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:51.497 10:12:22 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:20:51.497 10:12:22 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:51.497 10:12:22 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:51.497 10:12:22 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:51.497 ************************************ 00:20:51.497 START TEST xnvme_bdevperf 00:20:51.497 ************************************ 00:20:51.497 10:12:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:20:51.497 10:12:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:20:51.497 10:12:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:20:51.497 10:12:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:51.497 10:12:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:20:51.497 10:12:22 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:20:51.497 10:12:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:51.497 10:12:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:51.497 { 00:20:51.497 "subsystems": [ 00:20:51.497 { 00:20:51.497 "subsystem": "bdev", 00:20:51.497 "config": [ 00:20:51.497 { 00:20:51.497 "params": { 00:20:51.497 "io_mechanism": "io_uring", 00:20:51.497 "conserve_cpu": true, 00:20:51.497 "filename": "/dev/nvme0n1", 00:20:51.497 "name": "xnvme_bdev" 00:20:51.497 }, 00:20:51.497 "method": "bdev_xnvme_create" 00:20:51.497 }, 00:20:51.497 { 00:20:51.497 "method": "bdev_wait_for_examine" 00:20:51.497 } 00:20:51.497 ] 00:20:51.497 } 00:20:51.497 ] 00:20:51.497 } 00:20:51.497 [2024-12-09 10:12:22.171503] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:20:51.497 [2024-12-09 10:12:22.171697] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72431 ] 00:20:51.756 [2024-12-09 10:12:22.343858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.756 [2024-12-09 10:12:22.481463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:52.324 Running I/O for 5 seconds... 00:20:54.194 47936.00 IOPS, 187.25 MiB/s [2024-12-09T10:12:25.925Z] 47423.00 IOPS, 185.25 MiB/s [2024-12-09T10:12:26.859Z] 47903.33 IOPS, 187.12 MiB/s [2024-12-09T10:12:27.883Z] 49047.50 IOPS, 191.59 MiB/s [2024-12-09T10:12:27.883Z] 49247.60 IOPS, 192.37 MiB/s 00:20:57.086 Latency(us) 00:20:57.086 [2024-12-09T10:12:27.883Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:57.086 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:20:57.086 xnvme_bdev : 5.00 49233.16 192.32 0.00 0.00 1295.88 729.83 4140.68 00:20:57.086 [2024-12-09T10:12:27.883Z] =================================================================================================================== 00:20:57.086 [2024-12-09T10:12:27.883Z] Total : 49233.16 192.32 0.00 0.00 1295.88 729.83 4140.68 00:20:58.464 10:12:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:58.464 10:12:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:20:58.464 10:12:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:20:58.464 10:12:28 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:58.464 10:12:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:58.464 { 00:20:58.464 "subsystems": [ 00:20:58.464 { 00:20:58.464 "subsystem": "bdev", 00:20:58.464 "config": [ 00:20:58.464 { 00:20:58.464 "params": { 00:20:58.464 "io_mechanism": "io_uring", 00:20:58.464 "conserve_cpu": true, 00:20:58.465 "filename": "/dev/nvme0n1", 00:20:58.465 "name": "xnvme_bdev" 00:20:58.465 }, 00:20:58.465 "method": "bdev_xnvme_create" 00:20:58.465 }, 00:20:58.465 { 00:20:58.465 "method": "bdev_wait_for_examine" 00:20:58.465 } 00:20:58.465 ] 00:20:58.465 } 00:20:58.465 ] 00:20:58.465 } 00:20:58.465 [2024-12-09 10:12:29.072653] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
00:20:58.465 [2024-12-09 10:12:29.073023] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72515 ] 00:20:58.465 [2024-12-09 10:12:29.247152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.724 [2024-12-09 10:12:29.384430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.983 Running I/O for 5 seconds... 00:21:01.295 43712.00 IOPS, 170.75 MiB/s [2024-12-09T10:12:33.027Z] 42528.00 IOPS, 166.12 MiB/s [2024-12-09T10:12:33.961Z] 42197.33 IOPS, 164.83 MiB/s [2024-12-09T10:12:34.905Z] 42288.00 IOPS, 165.19 MiB/s 00:21:04.108 Latency(us) 00:21:04.108 [2024-12-09T10:12:34.905Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:04.108 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:21:04.108 xnvme_bdev : 5.00 42271.82 165.12 0.00 0.00 1508.43 606.95 7000.44 00:21:04.108 [2024-12-09T10:12:34.905Z] =================================================================================================================== 00:21:04.108 [2024-12-09T10:12:34.905Z] Total : 42271.82 165.12 0.00 0.00 1508.43 606.95 7000.44 00:21:05.485 00:21:05.485 real 0m13.855s 00:21:05.485 user 0m8.450s 00:21:05.485 sys 0m4.870s 00:21:05.485 10:12:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:05.485 10:12:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:05.485 ************************************ 00:21:05.485 END TEST xnvme_bdevperf 00:21:05.485 ************************************ 00:21:05.485 10:12:35 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:21:05.485 10:12:35 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:05.485 10:12:35 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:05.485 10:12:35 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:05.485 ************************************ 00:21:05.485 START TEST xnvme_fio_plugin 00:21:05.485 ************************************ 00:21:05.485 10:12:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:21:05.485 10:12:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:21:05.485 10:12:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:21:05.485 10:12:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:05.485 10:12:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:05.485 10:12:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:21:05.485 10:12:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:05.485 10:12:35 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:21:05.485 10:12:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local 
fio_dir=/usr/src/fio 00:21:05.485 10:12:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:21:05.485 10:12:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:05.485 10:12:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:05.485 10:12:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:05.485 10:12:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:21:05.485 10:12:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:05.485 10:12:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:05.485 10:12:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:05.485 10:12:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:21:05.485 10:12:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:05.485 10:12:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:05.485 10:12:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:05.485 10:12:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:21:05.485 10:12:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:05.485 10:12:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:05.485 { 00:21:05.485 "subsystems": [ 00:21:05.485 { 00:21:05.485 "subsystem": "bdev", 00:21:05.485 "config": [ 00:21:05.485 { 00:21:05.485 "params": { 00:21:05.485 "io_mechanism": "io_uring", 00:21:05.485 "conserve_cpu": true, 00:21:05.485 "filename": "/dev/nvme0n1", 00:21:05.485 "name": "xnvme_bdev" 00:21:05.485 }, 00:21:05.485 "method": "bdev_xnvme_create" 00:21:05.485 }, 00:21:05.485 { 00:21:05.485 "method": "bdev_wait_for_examine" 00:21:05.485 } 00:21:05.485 ] 00:21:05.485 } 00:21:05.485 ] 00:21:05.485 } 00:21:05.485 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:21:05.485 fio-3.35 00:21:05.485 Starting 1 thread 00:21:12.051 00:21:12.051 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72631: Mon Dec 9 10:12:42 2024 00:21:12.051 read: IOPS=48.4k, BW=189MiB/s (198MB/s)(945MiB/5001msec) 00:21:12.051 slat (nsec): min=2466, max=67749, avg=4130.71, stdev=2200.10 00:21:12.051 clat (usec): min=616, max=2633, avg=1156.95, stdev=153.25 00:21:12.051 lat (usec): min=619, max=2675, avg=1161.08, stdev=153.95 00:21:12.051 clat percentiles (usec): 00:21:12.051 | 1.00th=[ 898], 5.00th=[ 955], 10.00th=[ 988], 20.00th=[ 1037], 00:21:12.051 | 30.00th=[ 1074], 40.00th=[ 1106], 50.00th=[ 1139], 60.00th=[ 1172], 00:21:12.051 | 70.00th=[ 1205], 80.00th=[ 1254], 90.00th=[ 1336], 95.00th=[ 1401], 00:21:12.051 | 99.00th=[ 1713], 99.50th=[ 1811], 99.90th=[ 2008], 99.95th=[ 2147], 00:21:12.051 | 99.99th=[ 2507] 00:21:12.051 bw ( KiB/s): min=178176, max=213248, per=99.71%, avg=192881.78, 
stdev=11544.94, samples=9 00:21:12.051 iops : min=44544, max=53312, avg=48220.44, stdev=2886.24, samples=9 00:21:12.051 lat (usec) : 750=0.03%, 1000=11.71% 00:21:12.051 lat (msec) : 2=88.16%, 4=0.10% 00:21:12.051 cpu : usr=49.96%, sys=45.52%, ctx=13, majf=0, minf=762 00:21:12.051 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:21:12.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:12.051 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:21:12.051 issued rwts: total=241856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:12.051 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:12.051 00:21:12.051 Run status group 0 (all jobs): 00:21:12.051 READ: bw=189MiB/s (198MB/s), 189MiB/s-189MiB/s (198MB/s-198MB/s), io=945MiB (991MB), run=5001-5001msec 00:21:12.988 ----------------------------------------------------- 00:21:12.988 Suppressions used: 00:21:12.988 count bytes template 00:21:12.988 1 11 /usr/src/fio/parse.c 00:21:12.988 1 8 libtcmalloc_minimal.so 00:21:12.988 1 904 libcrypto.so 00:21:12.988 ----------------------------------------------------- 00:21:12.988 00:21:12.988 10:12:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:12.988 10:12:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:21:12.988 10:12:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:12.988 10:12:43 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:21:12.988 10:12:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:21:12.988 10:12:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:12.988 10:12:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:12.988 10:12:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:12.988 10:12:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:12.988 10:12:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:12.988 10:12:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:21:12.988 10:12:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:12.988 10:12:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:12.988 10:12:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:12.988 10:12:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:21:12.988 10:12:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:12.988 10:12:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:12.988 10:12:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:21:12.988 10:12:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:21:12.988 10:12:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:12.988 10:12:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:12.988 { 00:21:12.988 "subsystems": [ 00:21:12.988 { 00:21:12.988 "subsystem": "bdev", 00:21:12.988 "config": [ 00:21:12.988 { 00:21:12.988 "params": { 00:21:12.988 "io_mechanism": "io_uring", 00:21:12.988 "conserve_cpu": true, 00:21:12.988 "filename": "/dev/nvme0n1", 00:21:12.988 "name": "xnvme_bdev" 00:21:12.988 }, 00:21:12.988 "method": "bdev_xnvme_create" 00:21:12.988 }, 00:21:12.988 { 00:21:12.988 "method": "bdev_wait_for_examine" 00:21:12.988 } 00:21:12.988 ] 00:21:12.988 } 00:21:12.988 ] 00:21:12.988 } 00:21:13.247 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:21:13.247 fio-3.35 00:21:13.247 Starting 1 thread 00:21:19.810 00:21:19.810 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72734: Mon Dec 9 10:12:49 2024 00:21:19.810 write: IOPS=45.8k, BW=179MiB/s (188MB/s)(895MiB/5002msec); 0 zone resets 00:21:19.810 slat (nsec): min=2493, max=80096, avg=4504.81, stdev=2411.84 00:21:19.810 clat (usec): min=802, max=2278, avg=1218.07, stdev=156.30 00:21:19.810 lat (usec): min=805, max=2300, avg=1222.57, stdev=156.93 00:21:19.810 clat percentiles (usec): 00:21:19.810 | 1.00th=[ 947], 5.00th=[ 1004], 10.00th=[ 1045], 20.00th=[ 1090], 00:21:19.810 | 30.00th=[ 1139], 40.00th=[ 1172], 50.00th=[ 1205], 60.00th=[ 1237], 00:21:19.810 | 70.00th=[ 1270], 80.00th=[ 1319], 90.00th=[ 1401], 95.00th=[ 1500], 00:21:19.810 | 99.00th=[ 1762], 99.50th=[ 1827], 99.90th=[ 1975], 99.95th=[ 2024], 00:21:19.810 | 99.99th=[ 2147] 00:21:19.810 bw ( KiB/s): min=175616, max=194048, per=100.00%, avg=183978.67, stdev=7542.23, samples=9 00:21:19.810 iops : min=43904, max=48512, avg=45994.67, stdev=1885.56, samples=9 00:21:19.810 lat (usec) : 1000=4.42% 00:21:19.810 lat (msec) : 2=95.51%, 4=0.07% 00:21:19.810 cpu : usr=49.15%, sys=46.45%, ctx=15, majf=0, minf=763 00:21:19.810 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:21:19.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:19.810 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:21:19.810 issued rwts: total=0,229056,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:19.810 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:19.810 00:21:19.810 Run status group 0 (all jobs): 00:21:19.810 WRITE: bw=179MiB/s (188MB/s), 179MiB/s-179MiB/s (188MB/s-188MB/s), io=895MiB (938MB), run=5002-5002msec 00:21:20.377 ----------------------------------------------------- 00:21:20.377 Suppressions used: 00:21:20.377 count bytes template 00:21:20.377 1 11 /usr/src/fio/parse.c 00:21:20.377 1 8 libtcmalloc_minimal.so 00:21:20.377 1 904 libcrypto.so 00:21:20.377 ----------------------------------------------------- 00:21:20.377 00:21:20.377 00:21:20.377 real 0m15.077s 00:21:20.377 user 0m8.914s 00:21:20.377 sys 0m5.430s 00:21:20.377 10:12:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:20.377 10:12:51 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:21:20.377 ************************************ 00:21:20.377 END TEST xnvme_fio_plugin 00:21:20.377 ************************************ 00:21:20.377 10:12:51 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:21:20.377 10:12:51 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:21:20.377 10:12:51 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:21:20.377 10:12:51 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:21:20.377 10:12:51 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:21:20.377 10:12:51 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:21:20.377 10:12:51 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:21:20.377 10:12:51 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:21:20.377 10:12:51 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:21:20.377 10:12:51 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:20.377 10:12:51 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:20.377 10:12:51 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:20.377 ************************************ 00:21:20.377 START TEST xnvme_rpc 00:21:20.377 ************************************ 00:21:20.377 10:12:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:21:20.377 10:12:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:21:20.377 10:12:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:21:20.377 10:12:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:21:20.377 10:12:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:21:20.377 10:12:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72820 00:21:20.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:20.377 10:12:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72820 00:21:20.377 10:12:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:20.377 10:12:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72820 ']' 00:21:20.377 10:12:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:20.377 10:12:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:20.377 10:12:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:20.377 10:12:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:20.377 10:12:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:20.636 [2024-12-09 10:12:51.240804] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
00:21:20.636 [2024-12-09 10:12:51.241071] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72820 ] 00:21:20.636 [2024-12-09 10:12:51.415013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.895 [2024-12-09 10:12:51.548808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:21.830 10:12:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:21.830 10:12:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:21:21.830 10:12:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:21:21.830 10:12:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.830 10:12:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:21.830 xnvme_bdev 00:21:21.830 10:12:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.830 10:12:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:21:21.830 10:12:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:21:21.830 10:12:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:21:21.830 10:12:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.830 10:12:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:21.830 10:12:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.830 10:12:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:21:21.830 10:12:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:21:21.830 10:12:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:21:21.830 10:12:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:21:21.830 10:12:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.830 10:12:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:21.830 10:12:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.830 10:12:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:21:21.830 10:12:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:21:21.830 10:12:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:21:21.830 10:12:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:21:21.830 10:12:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.830 10:12:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:21.830 10:12:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.089 10:12:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:21:22.089 10:12:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:21:22.089 10:12:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:21:22.089 10:12:52 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.089 10:12:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:22.089 10:12:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:21:22.089 10:12:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.089 10:12:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:21:22.089 10:12:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:21:22.089 10:12:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.089 10:12:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:22.089 10:12:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.089 10:12:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72820 00:21:22.089 10:12:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72820 ']' 00:21:22.089 10:12:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72820 00:21:22.089 10:12:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:21:22.089 10:12:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:22.089 10:12:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72820 00:21:22.089 killing process with pid 72820 00:21:22.089 10:12:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:22.089 10:12:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:22.089 10:12:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72820' 00:21:22.089 10:12:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72820 00:21:22.089 10:12:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72820 00:21:24.620 ************************************ 00:21:24.620 END TEST xnvme_rpc 00:21:24.620 ************************************ 00:21:24.620 00:21:24.620 real 0m3.712s 00:21:24.620 user 0m3.848s 00:21:24.620 sys 0m0.635s 00:21:24.620 10:12:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:24.620 10:12:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:24.620 10:12:54 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:21:24.620 10:12:54 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:24.620 10:12:54 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:24.620 10:12:54 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:24.620 ************************************ 00:21:24.620 START TEST xnvme_bdevperf 00:21:24.620 ************************************ 00:21:24.620 10:12:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:21:24.620 10:12:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:21:24.620 10:12:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:21:24.620 10:12:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:24.620 10:12:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:21:24.620 10:12:54 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:21:24.620 10:12:54 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:21:24.620 10:12:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:24.620 { 00:21:24.620 "subsystems": [ 00:21:24.620 { 00:21:24.620 "subsystem": "bdev", 00:21:24.620 "config": [ 00:21:24.620 { 00:21:24.620 "params": { 00:21:24.620 "io_mechanism": "io_uring_cmd", 00:21:24.620 "conserve_cpu": false, 00:21:24.620 "filename": "/dev/ng0n1", 00:21:24.620 "name": "xnvme_bdev" 00:21:24.620 }, 00:21:24.620 "method": "bdev_xnvme_create" 00:21:24.621 }, 00:21:24.621 { 00:21:24.621 "method": "bdev_wait_for_examine" 00:21:24.621 } 00:21:24.621 ] 00:21:24.621 } 00:21:24.621 ] 00:21:24.621 } 00:21:24.621 [2024-12-09 10:12:55.011766] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:21:24.621 [2024-12-09 10:12:55.011983] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72904 ] 00:21:24.621 [2024-12-09 10:12:55.207057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.621 [2024-12-09 10:12:55.362467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:25.187 Running I/O for 5 seconds... 00:21:27.074 49227.00 IOPS, 192.29 MiB/s [2024-12-09T10:12:58.807Z] 48933.00 IOPS, 191.14 MiB/s [2024-12-09T10:13:00.184Z] 48323.33 IOPS, 188.76 MiB/s [2024-12-09T10:13:00.751Z] 48194.50 IOPS, 188.26 MiB/s 00:21:29.954 Latency(us) 00:21:29.954 [2024-12-09T10:13:00.751Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:29.954 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:21:29.954 xnvme_bdev : 5.00 48720.62 190.31 0.00 0.00 1309.29 305.34 21567.30 00:21:29.954 [2024-12-09T10:13:00.751Z] =================================================================================================================== 00:21:29.954 [2024-12-09T10:13:00.751Z] Total : 48720.62 190.31 0.00 0.00 1309.29 305.34 21567.30 00:21:31.333 10:13:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:31.334 10:13:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:21:31.334 10:13:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:21:31.334 10:13:01 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:21:31.334 10:13:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:31.334 { 00:21:31.334 "subsystems": [ 00:21:31.334 { 00:21:31.334 "subsystem": "bdev", 00:21:31.334 "config": [ 00:21:31.334 { 00:21:31.334 "params": { 00:21:31.334 "io_mechanism": "io_uring_cmd", 00:21:31.334 "conserve_cpu": false, 00:21:31.334 "filename": "/dev/ng0n1", 00:21:31.334 "name": "xnvme_bdev" 00:21:31.334 }, 00:21:31.334 "method": "bdev_xnvme_create" 00:21:31.334 }, 00:21:31.334 { 00:21:31.334 "method": "bdev_wait_for_examine" 00:21:31.334 } 00:21:31.334 ] 00:21:31.334 } 00:21:31.334 ] 00:21:31.334 } 00:21:31.334 [2024-12-09 10:13:02.084359] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
00:21:31.334 [2024-12-09 10:13:02.084598] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72985 ] 00:21:31.592 [2024-12-09 10:13:02.276584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.851 [2024-12-09 10:13:02.427341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.110 Running I/O for 5 seconds... 00:21:34.427 45120.00 IOPS, 176.25 MiB/s [2024-12-09T10:13:06.162Z] 44896.00 IOPS, 175.38 MiB/s [2024-12-09T10:13:07.097Z] 44309.33 IOPS, 173.08 MiB/s [2024-12-09T10:13:08.033Z] 43888.00 IOPS, 171.44 MiB/s 00:21:37.236 Latency(us) 00:21:37.236 [2024-12-09T10:13:08.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.236 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:21:37.236 xnvme_bdev : 5.00 44039.24 172.03 0.00 0.00 1448.16 781.96 6255.71 00:21:37.236 [2024-12-09T10:13:08.033Z] =================================================================================================================== 00:21:37.236 [2024-12-09T10:13:08.033Z] Total : 44039.24 172.03 0.00 0.00 1448.16 781.96 6255.71 00:21:38.613 10:13:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:38.613 10:13:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:21:38.613 10:13:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:21:38.613 10:13:09 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:21:38.613 10:13:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:38.613 { 00:21:38.613 "subsystems": [ 00:21:38.613 { 00:21:38.613 "subsystem": "bdev", 00:21:38.613 "config": [ 00:21:38.613 { 00:21:38.613 "params": { 00:21:38.613 "io_mechanism": "io_uring_cmd", 00:21:38.613 "conserve_cpu": false, 00:21:38.613 "filename": "/dev/ng0n1", 00:21:38.613 "name": "xnvme_bdev" 00:21:38.613 }, 00:21:38.613 "method": "bdev_xnvme_create" 00:21:38.613 }, 00:21:38.613 { 00:21:38.613 "method": "bdev_wait_for_examine" 00:21:38.613 } 00:21:38.613 ] 00:21:38.613 } 00:21:38.613 ] 00:21:38.613 } 00:21:38.613 [2024-12-09 10:13:09.129227] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:21:38.613 [2024-12-09 10:13:09.129398] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73066 ] 00:21:38.613 [2024-12-09 10:13:09.316844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.872 [2024-12-09 10:13:09.454265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:39.131 Running I/O for 5 seconds... 
00:21:41.030 69248.00 IOPS, 270.50 MiB/s [2024-12-09T10:13:13.204Z] 71168.00 IOPS, 278.00 MiB/s [2024-12-09T10:13:14.140Z] 73557.33 IOPS, 287.33 MiB/s [2024-12-09T10:13:15.077Z] 74640.00 IOPS, 291.56 MiB/s 00:21:44.280 Latency(us) 00:21:44.280 [2024-12-09T10:13:15.077Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.280 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:21:44.280 xnvme_bdev : 5.00 75436.44 294.67 0.00 0.00 844.80 517.59 2427.81 00:21:44.280 [2024-12-09T10:13:15.077Z] =================================================================================================================== 00:21:44.280 [2024-12-09T10:13:15.077Z] Total : 75436.44 294.67 0.00 0.00 844.80 517.59 2427.81 00:21:45.214 10:13:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:45.214 10:13:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:21:45.214 10:13:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:21:45.214 10:13:15 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:21:45.214 10:13:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:45.214 { 00:21:45.214 "subsystems": [ 00:21:45.214 { 00:21:45.214 "subsystem": "bdev", 00:21:45.214 "config": [ 00:21:45.214 { 00:21:45.214 "params": { 00:21:45.214 "io_mechanism": "io_uring_cmd", 00:21:45.214 "conserve_cpu": false, 00:21:45.214 "filename": "/dev/ng0n1", 00:21:45.214 "name": "xnvme_bdev" 00:21:45.214 }, 00:21:45.214 "method": "bdev_xnvme_create" 00:21:45.214 }, 00:21:45.214 { 00:21:45.214 "method": "bdev_wait_for_examine" 00:21:45.214 } 00:21:45.214 ] 00:21:45.214 } 00:21:45.214 ] 00:21:45.214 } 00:21:45.214 [2024-12-09 10:13:15.948188] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:21:45.214 [2024-12-09 10:13:15.948346] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73138 ] 00:21:45.473 [2024-12-09 10:13:16.119532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.473 [2024-12-09 10:13:16.258541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:46.040 Running I/O for 5 seconds... 
00:21:47.909 35830.00 IOPS, 139.96 MiB/s [2024-12-09T10:13:19.640Z] 34102.50 IOPS, 133.21 MiB/s [2024-12-09T10:13:21.013Z] 36670.33 IOPS, 143.24 MiB/s [2024-12-09T10:13:21.957Z] 37853.50 IOPS, 147.87 MiB/s [2024-12-09T10:13:21.957Z] 39143.40 IOPS, 152.90 MiB/s 00:21:51.160 Latency(us) 00:21:51.160 [2024-12-09T10:13:21.957Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:51.160 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:21:51.160 xnvme_bdev : 5.00 39124.87 152.83 0.00 0.00 1631.18 82.85 396552.38 00:21:51.160 [2024-12-09T10:13:21.957Z] =================================================================================================================== 00:21:51.160 [2024-12-09T10:13:21.957Z] Total : 39124.87 152.83 0.00 0.00 1631.18 82.85 396552.38 00:21:52.096 00:21:52.096 real 0m27.954s 00:21:52.096 user 0m15.357s 00:21:52.096 sys 0m12.181s 00:21:52.096 ************************************ 00:21:52.096 END TEST xnvme_bdevperf 00:21:52.096 ************************************ 00:21:52.096 10:13:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:52.096 10:13:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:52.096 10:13:22 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:21:52.096 10:13:22 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:52.096 10:13:22 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:52.096 10:13:22 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:52.355 ************************************ 00:21:52.355 START TEST xnvme_fio_plugin 00:21:52.355 ************************************ 00:21:52.355 10:13:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:21:52.355 10:13:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:21:52.355 10:13:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:21:52.355 10:13:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:52.355 10:13:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:52.355 10:13:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:52.355 10:13:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:21:52.355 10:13:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:52.355 10:13:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:52.355 10:13:22 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:21:52.355 10:13:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:52.355 10:13:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:52.355 10:13:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 
00:21:52.355 10:13:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:21:52.355 10:13:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:52.356 10:13:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:52.356 10:13:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:52.356 10:13:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:21:52.356 10:13:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:52.356 10:13:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:52.356 10:13:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:52.356 10:13:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:21:52.356 10:13:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:52.356 10:13:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:52.356 { 00:21:52.356 "subsystems": [ 00:21:52.356 { 00:21:52.356 "subsystem": "bdev", 00:21:52.356 "config": [ 00:21:52.356 { 00:21:52.356 "params": { 00:21:52.356 "io_mechanism": "io_uring_cmd", 00:21:52.356 "conserve_cpu": false, 00:21:52.356 "filename": "/dev/ng0n1", 00:21:52.356 "name": "xnvme_bdev" 00:21:52.356 }, 00:21:52.356 "method": "bdev_xnvme_create" 00:21:52.356 }, 00:21:52.356 { 00:21:52.356 "method": "bdev_wait_for_examine" 00:21:52.356 } 00:21:52.356 ] 00:21:52.356 } 00:21:52.356 ] 00:21:52.356 } 00:21:52.614 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:21:52.614 fio-3.35 00:21:52.614 Starting 1 thread 00:21:59.176 00:21:59.176 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73262: Mon Dec 9 10:13:28 2024 00:21:59.176 read: IOPS=50.5k, BW=197MiB/s (207MB/s)(987MiB/5002msec) 00:21:59.176 slat (nsec): min=2443, max=65695, avg=3512.38, stdev=2118.52 00:21:59.176 clat (usec): min=135, max=19132, avg=1127.28, stdev=289.55 00:21:59.176 lat (usec): min=146, max=19135, avg=1130.79, stdev=289.72 00:21:59.176 clat percentiles (usec): 00:21:59.176 | 1.00th=[ 881], 5.00th=[ 938], 10.00th=[ 971], 20.00th=[ 1020], 00:21:59.176 | 30.00th=[ 1045], 40.00th=[ 1074], 50.00th=[ 1106], 60.00th=[ 1139], 00:21:59.176 | 70.00th=[ 1172], 80.00th=[ 1205], 90.00th=[ 1270], 95.00th=[ 1336], 00:21:59.176 | 99.00th=[ 1680], 99.50th=[ 1844], 99.90th=[ 4015], 99.95th=[ 5669], 00:21:59.176 | 99.99th=[15008] 00:21:59.176 bw ( KiB/s): min=186741, max=218624, per=100.00%, avg=202116.70, stdev=10423.27, samples=10 00:21:59.176 iops : min=46685, max=54656, avg=50529.10, stdev=2605.88, samples=10 00:21:59.176 lat (usec) : 250=0.01%, 500=0.02%, 750=0.16%, 1000=15.40% 00:21:59.176 lat (msec) : 2=84.07%, 4=0.24%, 10=0.08%, 20=0.03% 00:21:59.176 cpu : usr=33.23%, sys=65.63%, ctx=13, majf=0, minf=762 00:21:59.176 IO depths : 1=1.5%, 2=3.1%, 4=6.1%, 8=12.3%, 16=25.0%, 32=50.4%, >=64=1.6% 00:21:59.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:59.176 complete 
: 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:21:59.176 issued rwts: total=252564,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:59.176 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:59.176 00:21:59.176 Run status group 0 (all jobs): 00:21:59.176 READ: bw=197MiB/s (207MB/s), 197MiB/s-197MiB/s (207MB/s-207MB/s), io=987MiB (1035MB), run=5002-5002msec 00:21:59.744 ----------------------------------------------------- 00:21:59.744 Suppressions used: 00:21:59.744 count bytes template 00:21:59.744 1 11 /usr/src/fio/parse.c 00:21:59.744 1 8 libtcmalloc_minimal.so 00:21:59.744 1 904 libcrypto.so 00:21:59.744 ----------------------------------------------------- 00:21:59.744 00:21:59.744 10:13:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:59.744 10:13:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:59.744 10:13:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:59.744 10:13:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:21:59.744 10:13:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:59.744 10:13:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:59.744 10:13:30 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:21:59.744 10:13:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:59.744 10:13:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:59.744 10:13:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:21:59.744 10:13:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:21:59.744 10:13:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:59.744 10:13:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:59.744 10:13:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:21:59.744 10:13:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:59.744 10:13:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:59.744 10:13:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:59.744 10:13:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:59.744 10:13:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:21:59.744 10:13:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:59.744 10:13:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 
--filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:22:00.003 { 00:22:00.003 "subsystems": [ 00:22:00.003 { 00:22:00.003 "subsystem": "bdev", 00:22:00.003 "config": [ 00:22:00.003 { 00:22:00.003 "params": { 00:22:00.003 "io_mechanism": "io_uring_cmd", 00:22:00.003 "conserve_cpu": false, 00:22:00.003 "filename": "/dev/ng0n1", 00:22:00.003 "name": "xnvme_bdev" 00:22:00.003 }, 00:22:00.003 "method": "bdev_xnvme_create" 00:22:00.003 }, 00:22:00.003 { 00:22:00.003 "method": "bdev_wait_for_examine" 00:22:00.003 } 00:22:00.003 ] 00:22:00.003 } 00:22:00.003 ] 00:22:00.003 } 00:22:00.003 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:22:00.003 fio-3.35 00:22:00.003 Starting 1 thread 00:22:06.573 00:22:06.573 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73353: Mon Dec 9 10:13:36 2024 00:22:06.573 write: IOPS=42.6k, BW=166MiB/s (174MB/s)(833MiB/5004msec); 0 zone resets 00:22:06.573 slat (nsec): min=2607, max=84033, avg=5075.08, stdev=2545.55 00:22:06.573 clat (usec): min=72, max=11499, avg=1310.93, stdev=481.89 00:22:06.573 lat (usec): min=77, max=11505, avg=1316.01, stdev=482.16 00:22:06.573 clat percentiles (usec): 00:22:06.573 | 1.00th=[ 412], 5.00th=[ 1029], 10.00th=[ 1074], 20.00th=[ 1139], 00:22:06.573 | 30.00th=[ 1172], 40.00th=[ 1221], 50.00th=[ 1254], 60.00th=[ 1303], 00:22:06.573 | 70.00th=[ 1336], 80.00th=[ 1385], 90.00th=[ 1500], 95.00th=[ 1680], 00:22:06.573 | 99.00th=[ 3851], 99.50th=[ 4490], 99.90th=[ 5669], 99.95th=[ 9110], 00:22:06.573 | 99.99th=[10945] 00:22:06.573 bw ( KiB/s): min=168448, max=180736, per=100.00%, avg=173842.67, stdev=3883.10, samples=9 00:22:06.573 iops : min=42112, max=45184, avg=43460.67, stdev=970.77, samples=9 00:22:06.573 lat (usec) : 100=0.01%, 250=0.36%, 500=0.96%, 750=0.90%, 1000=1.66% 00:22:06.573 lat (msec) : 2=93.93%, 4=1.33%, 10=0.82%, 20=0.03% 00:22:06.573 cpu : usr=44.07%, sys=54.81%, ctx=16, majf=0, minf=763 00:22:06.573 IO depths : 1=1.5%, 2=3.0%, 4=5.9%, 8=11.8%, 16=23.8%, 32=51.9%, >=64=2.1% 00:22:06.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.573 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:22:06.573 issued rwts: total=0,213165,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:06.573 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:06.573 00:22:06.573 Run status group 0 (all jobs): 00:22:06.573 WRITE: bw=166MiB/s (174MB/s), 166MiB/s-166MiB/s (174MB/s-174MB/s), io=833MiB (873MB), run=5004-5004msec 00:22:07.141 ----------------------------------------------------- 00:22:07.141 Suppressions used: 00:22:07.141 count bytes template 00:22:07.141 1 11 /usr/src/fio/parse.c 00:22:07.141 1 8 libtcmalloc_minimal.so 00:22:07.141 1 904 libcrypto.so 00:22:07.141 ----------------------------------------------------- 00:22:07.141 00:22:07.141 00:22:07.141 real 0m15.007s 00:22:07.141 user 0m7.772s 00:22:07.141 sys 0m6.847s 00:22:07.141 10:13:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:07.141 10:13:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:22:07.141 ************************************ 00:22:07.141 END TEST xnvme_fio_plugin 00:22:07.141 ************************************ 00:22:07.400 10:13:37 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:22:07.400 10:13:37 nvme_xnvme -- xnvme/xnvme.sh@83 -- # 
method_bdev_xnvme_create_0["conserve_cpu"]=true 00:22:07.400 10:13:37 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:22:07.400 10:13:37 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:22:07.400 10:13:37 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:07.400 10:13:37 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:07.400 10:13:37 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:07.400 ************************************ 00:22:07.400 START TEST xnvme_rpc 00:22:07.400 ************************************ 00:22:07.400 10:13:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:22:07.400 10:13:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:22:07.400 10:13:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:22:07.400 10:13:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:22:07.400 10:13:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:22:07.400 10:13:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=73444 00:22:07.400 10:13:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 73444 00:22:07.400 10:13:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:07.400 10:13:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 73444 ']' 00:22:07.400 10:13:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:07.400 10:13:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:07.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:07.400 10:13:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:07.400 10:13:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:07.400 10:13:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:07.400 [2024-12-09 10:13:38.105521] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
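(For reference, the xnvme_rpc test starting here reduces to a short JSON-RPC session against spdk_tgt. A minimal standalone sketch, assuming the stock SPDK tree layout used throughout this log, with build/bin/spdk_tgt and scripts/rpc.py run from the repo root:

  # start the target; it listens on /var/tmp/spdk.sock by default
  ./build/bin/spdk_tgt &
  # create an xnvme bdev over the char namespace via io_uring_cmd, conserve_cpu enabled (-c),
  # mirroring the rpc_cmd call in the log below
  ./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
  # read the bdev subsystem config back and check one param, as the test does with jq
  ./scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
  # tear down the bdev before killing the target
  ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev
)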
00:22:07.400 [2024-12-09 10:13:38.105730] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73444 ] 00:22:07.658 [2024-12-09 10:13:38.293030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.658 [2024-12-09 10:13:38.430098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:08.646 10:13:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:08.646 10:13:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:22:08.646 10:13:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:22:08.646 10:13:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.646 10:13:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:08.646 xnvme_bdev 00:22:08.646 10:13:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.646 10:13:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:22:08.646 10:13:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:22:08.646 10:13:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:22:08.646 10:13:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.646 10:13:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:08.646 10:13:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.646 10:13:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:22:08.646 10:13:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:22:08.646 10:13:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:22:08.646 10:13:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:22:08.646 10:13:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.646 10:13:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:08.646 10:13:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.646 10:13:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:22:08.646 10:13:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:22:08.646 10:13:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:22:08.646 10:13:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:22:08.646 10:13:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.646 10:13:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:08.908 10:13:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.908 10:13:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:22:08.908 10:13:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:22:08.908 10:13:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:22:08.908 10:13:39 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.908 10:13:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:08.908 10:13:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:22:08.908 10:13:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.908 10:13:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:22:08.908 10:13:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:22:08.908 10:13:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.908 10:13:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:08.908 10:13:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.908 10:13:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 73444 00:22:08.908 10:13:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 73444 ']' 00:22:08.908 10:13:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 73444 00:22:08.908 10:13:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:22:08.908 10:13:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:08.908 10:13:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73444 00:22:08.908 killing process with pid 73444 00:22:08.908 10:13:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:08.908 10:13:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:08.908 10:13:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73444' 00:22:08.908 10:13:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 73444 00:22:08.908 10:13:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 73444 00:22:11.443 ************************************ 00:22:11.443 END TEST xnvme_rpc 00:22:11.443 ************************************ 00:22:11.443 00:22:11.443 real 0m3.774s 00:22:11.443 user 0m3.874s 00:22:11.443 sys 0m0.636s 00:22:11.443 10:13:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:11.443 10:13:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:11.443 10:13:41 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:22:11.443 10:13:41 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:11.443 10:13:41 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:11.443 10:13:41 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:11.443 ************************************ 00:22:11.443 START TEST xnvme_bdevperf 00:22:11.443 ************************************ 00:22:11.443 10:13:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:22:11.443 10:13:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:22:11.443 10:13:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:22:11.443 10:13:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:22:11.443 10:13:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:22:11.443 10:13:41 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:22:11.443 10:13:41 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:22:11.443 10:13:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:11.443 { 00:22:11.443 "subsystems": [ 00:22:11.443 { 00:22:11.443 "subsystem": "bdev", 00:22:11.443 "config": [ 00:22:11.443 { 00:22:11.443 "params": { 00:22:11.443 "io_mechanism": "io_uring_cmd", 00:22:11.443 "conserve_cpu": true, 00:22:11.443 "filename": "/dev/ng0n1", 00:22:11.443 "name": "xnvme_bdev" 00:22:11.443 }, 00:22:11.443 "method": "bdev_xnvme_create" 00:22:11.443 }, 00:22:11.443 { 00:22:11.443 "method": "bdev_wait_for_examine" 00:22:11.443 } 00:22:11.443 ] 00:22:11.443 } 00:22:11.444 ] 00:22:11.444 } 00:22:11.444 [2024-12-09 10:13:41.914427] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:22:11.444 [2024-12-09 10:13:41.914645] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73524 ] 00:22:11.444 [2024-12-09 10:13:42.100531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.444 [2024-12-09 10:13:42.217626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:12.010 Running I/O for 5 seconds... 00:22:13.881 46186.00 IOPS, 180.41 MiB/s [2024-12-09T10:13:45.619Z] 48070.50 IOPS, 187.78 MiB/s [2024-12-09T10:13:46.994Z] 48934.00 IOPS, 191.15 MiB/s [2024-12-09T10:13:47.929Z] 49388.50 IOPS, 192.92 MiB/s 00:22:17.132 Latency(us) 00:22:17.132 [2024-12-09T10:13:47.929Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:17.132 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:22:17.132 xnvme_bdev : 5.00 49864.39 194.78 0.00 0.00 1279.54 640.47 19184.17 00:22:17.132 [2024-12-09T10:13:47.929Z] =================================================================================================================== 00:22:17.132 [2024-12-09T10:13:47.929Z] Total : 49864.39 194.78 0.00 0.00 1279.54 640.47 19184.17 00:22:18.071 10:13:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:22:18.071 10:13:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:22:18.071 10:13:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:22:18.072 10:13:48 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:22:18.072 10:13:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:18.072 { 00:22:18.072 "subsystems": [ 00:22:18.072 { 00:22:18.072 "subsystem": "bdev", 00:22:18.072 "config": [ 00:22:18.072 { 00:22:18.072 "params": { 00:22:18.072 "io_mechanism": "io_uring_cmd", 00:22:18.072 "conserve_cpu": true, 00:22:18.072 "filename": "/dev/ng0n1", 00:22:18.072 "name": "xnvme_bdev" 00:22:18.072 }, 00:22:18.072 "method": "bdev_xnvme_create" 00:22:18.072 }, 00:22:18.072 { 00:22:18.072 "method": "bdev_wait_for_examine" 00:22:18.072 } 00:22:18.072 ] 00:22:18.072 } 00:22:18.072 ] 00:22:18.072 } 00:22:18.072 [2024-12-09 10:13:48.762365] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
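(The bdevperf runs in this section all follow one shape: gen_conf streams the bdev JSON over /dev/fd/62 and bdevperf replays a single workload against it for five seconds. A rough equivalent with the config in a regular file — xnvme.json is a name assumed here, and the JSON body is copied from the config printed in this log:

  cat > xnvme.json <<'EOF'
  {"subsystems":[{"subsystem":"bdev","config":[
    {"method":"bdev_xnvme_create","params":{"io_mechanism":"io_uring_cmd",
     "conserve_cpu":true,"filename":"/dev/ng0n1","name":"xnvme_bdev"}},
    {"method":"bdev_wait_for_examine"}]}]}
  EOF
  # queue depth 64, 4 KiB randread for 5 s, targeting only xnvme_bdev (-T);
  # swap -w randread for randwrite/unmap/write_zeroes to get the other runs below
  ./build/examples/bdevperf --json xnvme.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096
)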
00:22:18.072 [2024-12-09 10:13:48.762854] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73598 ] 00:22:18.331 [2024-12-09 10:13:48.946880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.331 [2024-12-09 10:13:49.084006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:18.899 Running I/O for 5 seconds... 00:22:20.770 42293.00 IOPS, 165.21 MiB/s [2024-12-09T10:13:52.503Z] 43839.50 IOPS, 171.25 MiB/s [2024-12-09T10:13:53.885Z] 43583.67 IOPS, 170.25 MiB/s [2024-12-09T10:13:54.820Z] 43247.75 IOPS, 168.94 MiB/s [2024-12-09T10:13:54.820Z] 43276.60 IOPS, 169.05 MiB/s 00:22:24.023 Latency(us) 00:22:24.023 [2024-12-09T10:13:54.820Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:24.023 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:22:24.023 xnvme_bdev : 5.00 43251.75 168.95 0.00 0.00 1474.37 707.49 6464.23 00:22:24.023 [2024-12-09T10:13:54.820Z] =================================================================================================================== 00:22:24.023 [2024-12-09T10:13:54.820Z] Total : 43251.75 168.95 0.00 0.00 1474.37 707.49 6464.23 00:22:24.969 10:13:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:22:24.969 10:13:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:22:24.969 10:13:55 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:22:24.969 10:13:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:22:24.969 10:13:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:24.969 { 00:22:24.969 "subsystems": [ 00:22:24.969 { 00:22:24.969 "subsystem": "bdev", 00:22:24.969 "config": [ 00:22:24.969 { 00:22:24.969 "params": { 00:22:24.969 "io_mechanism": "io_uring_cmd", 00:22:24.969 "conserve_cpu": true, 00:22:24.969 "filename": "/dev/ng0n1", 00:22:24.969 "name": "xnvme_bdev" 00:22:24.969 }, 00:22:24.969 "method": "bdev_xnvme_create" 00:22:24.969 }, 00:22:24.969 { 00:22:24.969 "method": "bdev_wait_for_examine" 00:22:24.969 } 00:22:24.969 ] 00:22:24.969 } 00:22:24.969 ] 00:22:24.969 } 00:22:24.969 [2024-12-09 10:13:55.700943] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:22:24.969 [2024-12-09 10:13:55.701095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73684 ] 00:22:25.231 [2024-12-09 10:13:55.870643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:25.231 [2024-12-09 10:13:56.004215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:25.798 Running I/O for 5 seconds... 
00:22:27.672 77888.00 IOPS, 304.25 MiB/s [2024-12-09T10:13:59.433Z] 78688.00 IOPS, 307.38 MiB/s [2024-12-09T10:14:00.378Z] 79168.00 IOPS, 309.25 MiB/s [2024-12-09T10:14:01.757Z] 79056.00 IOPS, 308.81 MiB/s 00:22:30.960 Latency(us) 00:22:30.960 [2024-12-09T10:14:01.757Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:30.960 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:22:30.960 xnvme_bdev : 5.00 79093.69 308.96 0.00 0.00 805.65 458.01 2785.28 00:22:30.960 [2024-12-09T10:14:01.757Z] =================================================================================================================== 00:22:30.960 [2024-12-09T10:14:01.757Z] Total : 79093.69 308.96 0.00 0.00 805.65 458.01 2785.28 00:22:31.896 10:14:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:22:31.896 10:14:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:22:31.896 10:14:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:22:31.896 10:14:02 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:22:31.896 10:14:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:31.896 { 00:22:31.896 "subsystems": [ 00:22:31.896 { 00:22:31.896 "subsystem": "bdev", 00:22:31.896 "config": [ 00:22:31.896 { 00:22:31.896 "params": { 00:22:31.896 "io_mechanism": "io_uring_cmd", 00:22:31.896 "conserve_cpu": true, 00:22:31.896 "filename": "/dev/ng0n1", 00:22:31.896 "name": "xnvme_bdev" 00:22:31.896 }, 00:22:31.896 "method": "bdev_xnvme_create" 00:22:31.896 }, 00:22:31.896 { 00:22:31.896 "method": "bdev_wait_for_examine" 00:22:31.896 } 00:22:31.896 ] 00:22:31.896 } 00:22:31.896 ] 00:22:31.896 } 00:22:31.896 [2024-12-09 10:14:02.534576] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:22:31.896 [2024-12-09 10:14:02.534788] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73760 ] 00:22:32.156 [2024-12-09 10:14:02.719946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.156 [2024-12-09 10:14:02.853206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.414 Running I/O for 5 seconds... 
00:22:34.721 39781.00 IOPS, 155.39 MiB/s [2024-12-09T10:14:06.452Z] 39903.50 IOPS, 155.87 MiB/s [2024-12-09T10:14:07.386Z] 40081.67 IOPS, 156.57 MiB/s [2024-12-09T10:14:08.381Z] 39873.75 IOPS, 155.76 MiB/s [2024-12-09T10:14:08.381Z] 39781.20 IOPS, 155.40 MiB/s 00:22:37.584 Latency(us) 00:22:37.584 [2024-12-09T10:14:08.381Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:37.584 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:22:37.584 xnvme_bdev : 5.00 39747.26 155.26 0.00 0.00 1601.04 161.98 11439.01 00:22:37.584 [2024-12-09T10:14:08.381Z] =================================================================================================================== 00:22:37.584 [2024-12-09T10:14:08.381Z] Total : 39747.26 155.26 0.00 0.00 1601.04 161.98 11439.01 00:22:38.960 ************************************ 00:22:38.960 END TEST xnvme_bdevperf 00:22:38.960 ************************************ 00:22:38.960 00:22:38.960 real 0m27.619s 00:22:38.960 user 0m16.703s 00:22:38.960 sys 0m8.620s 00:22:38.960 10:14:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:38.960 10:14:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:38.960 10:14:09 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:22:38.960 10:14:09 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:38.960 10:14:09 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:38.960 10:14:09 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:38.960 ************************************ 00:22:38.960 START TEST xnvme_fio_plugin 00:22:38.960 ************************************ 00:22:38.960 10:14:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:22:38.960 10:14:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:22:38.960 10:14:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:22:38.960 10:14:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:22:38.960 10:14:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:22:38.960 10:14:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:22:38.960 10:14:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:38.960 10:14:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:22:38.960 10:14:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:38.960 10:14:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:38.960 10:14:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:38.960 10:14:09 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:22:38.960 10:14:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 
00:22:38.960 10:14:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:38.961 10:14:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:22:38.961 10:14:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:38.961 10:14:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:38.961 10:14:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:22:38.961 10:14:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:38.961 10:14:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:38.961 10:14:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:38.961 10:14:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:22:38.961 10:14:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:38.961 10:14:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:22:38.961 { 00:22:38.961 "subsystems": [ 00:22:38.961 { 00:22:38.961 "subsystem": "bdev", 00:22:38.961 "config": [ 00:22:38.961 { 00:22:38.961 "params": { 00:22:38.961 "io_mechanism": "io_uring_cmd", 00:22:38.961 "conserve_cpu": true, 00:22:38.961 "filename": "/dev/ng0n1", 00:22:38.961 "name": "xnvme_bdev" 00:22:38.961 }, 00:22:38.961 "method": "bdev_xnvme_create" 00:22:38.961 }, 00:22:38.961 { 00:22:38.961 "method": "bdev_wait_for_examine" 00:22:38.961 } 00:22:38.961 ] 00:22:38.961 } 00:22:38.961 ] 00:22:38.961 } 00:22:38.961 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:22:38.961 fio-3.35 00:22:38.961 Starting 1 thread 00:22:45.527 00:22:45.527 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73884: Mon Dec 9 10:14:15 2024 00:22:45.527 read: IOPS=51.6k, BW=201MiB/s (211MB/s)(1007MiB/5001msec) 00:22:45.527 slat (nsec): min=2580, max=69511, avg=3629.42, stdev=2078.89 00:22:45.527 clat (usec): min=755, max=2615, avg=1093.08, stdev=124.86 00:22:45.527 lat (usec): min=758, max=2642, avg=1096.71, stdev=125.46 00:22:45.527 clat percentiles (usec): 00:22:45.527 | 1.00th=[ 881], 5.00th=[ 930], 10.00th=[ 955], 20.00th=[ 996], 00:22:45.527 | 30.00th=[ 1020], 40.00th=[ 1057], 50.00th=[ 1074], 60.00th=[ 1106], 00:22:45.527 | 70.00th=[ 1139], 80.00th=[ 1172], 90.00th=[ 1254], 95.00th=[ 1303], 00:22:45.527 | 99.00th=[ 1467], 99.50th=[ 1647], 99.90th=[ 1860], 99.95th=[ 1958], 00:22:45.527 | 99.99th=[ 2409] 00:22:45.527 bw ( KiB/s): min=187392, max=219136, per=100.00%, avg=207928.89, stdev=10212.59, samples=9 00:22:45.527 iops : min=46848, max=54784, avg=51982.22, stdev=2553.15, samples=9 00:22:45.527 lat (usec) : 1000=22.28% 00:22:45.527 lat (msec) : 2=77.67%, 4=0.05% 00:22:45.527 cpu : usr=49.28%, sys=47.18%, ctx=13, majf=0, minf=762 00:22:45.527 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:22:45.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:45.527 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, 
>=64=0.0% 00:22:45.527 issued rwts: total=257856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:45.527 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:45.527 00:22:45.527 Run status group 0 (all jobs): 00:22:45.527 READ: bw=201MiB/s (211MB/s), 201MiB/s-201MiB/s (211MB/s-211MB/s), io=1007MiB (1056MB), run=5001-5001msec 00:22:46.461 ----------------------------------------------------- 00:22:46.461 Suppressions used: 00:22:46.461 count bytes template 00:22:46.461 1 11 /usr/src/fio/parse.c 00:22:46.461 1 8 libtcmalloc_minimal.so 00:22:46.461 1 904 libcrypto.so 00:22:46.461 ----------------------------------------------------- 00:22:46.461 00:22:46.461 10:14:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:22:46.461 10:14:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:22:46.461 10:14:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:22:46.461 10:14:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:22:46.461 10:14:16 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:22:46.461 10:14:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:46.461 10:14:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:22:46.461 10:14:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:46.461 10:14:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:46.461 10:14:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:46.461 10:14:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:22:46.461 10:14:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:46.461 10:14:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:46.461 10:14:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:46.461 10:14:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:22:46.461 10:14:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:46.461 10:14:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:46.461 10:14:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:46.461 10:14:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:22:46.462 10:14:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:46.462 10:14:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 
--numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:22:46.462 { 00:22:46.462 "subsystems": [ 00:22:46.462 { 00:22:46.462 "subsystem": "bdev", 00:22:46.462 "config": [ 00:22:46.462 { 00:22:46.462 "params": { 00:22:46.462 "io_mechanism": "io_uring_cmd", 00:22:46.462 "conserve_cpu": true, 00:22:46.462 "filename": "/dev/ng0n1", 00:22:46.462 "name": "xnvme_bdev" 00:22:46.462 }, 00:22:46.462 "method": "bdev_xnvme_create" 00:22:46.462 }, 00:22:46.462 { 00:22:46.462 "method": "bdev_wait_for_examine" 00:22:46.462 } 00:22:46.462 ] 00:22:46.462 } 00:22:46.462 ] 00:22:46.462 } 00:22:46.462 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:22:46.462 fio-3.35 00:22:46.462 Starting 1 thread 00:22:53.028 00:22:53.028 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73978: Mon Dec 9 10:14:23 2024 00:22:53.028 write: IOPS=42.4k, BW=165MiB/s (173MB/s)(827MiB/5001msec); 0 zone resets 00:22:53.028 slat (usec): min=2, max=296, avg= 5.26, stdev= 3.07 00:22:53.028 clat (usec): min=510, max=3071, avg=1302.07, stdev=178.66 00:22:53.028 lat (usec): min=515, max=3118, avg=1307.32, stdev=179.52 00:22:53.028 clat percentiles (usec): 00:22:53.028 | 1.00th=[ 1020], 5.00th=[ 1074], 10.00th=[ 1106], 20.00th=[ 1156], 00:22:53.028 | 30.00th=[ 1205], 40.00th=[ 1237], 50.00th=[ 1287], 60.00th=[ 1319], 00:22:53.028 | 70.00th=[ 1352], 80.00th=[ 1401], 90.00th=[ 1516], 95.00th=[ 1663], 00:22:53.028 | 99.00th=[ 1876], 99.50th=[ 1942], 99.90th=[ 2311], 99.95th=[ 2540], 00:22:53.028 | 99.99th=[ 2933] 00:22:53.028 bw ( KiB/s): min=166912, max=174080, per=100.00%, avg=169585.78, stdev=2452.50, samples=9 00:22:53.028 iops : min=41728, max=43520, avg=42396.44, stdev=613.12, samples=9 00:22:53.028 lat (usec) : 750=0.01%, 1000=0.39% 00:22:53.028 lat (msec) : 2=99.26%, 4=0.35% 00:22:53.028 cpu : usr=60.96%, sys=35.60%, ctx=66, majf=0, minf=763 00:22:53.028 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=24.9%, 32=50.2%, >=64=1.6% 00:22:53.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:53.028 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:22:53.028 issued rwts: total=0,211832,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:53.028 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:53.028 00:22:53.028 Run status group 0 (all jobs): 00:22:53.028 WRITE: bw=165MiB/s (173MB/s), 165MiB/s-165MiB/s (173MB/s-173MB/s), io=827MiB (868MB), run=5001-5001msec 00:22:53.975 ----------------------------------------------------- 00:22:53.976 Suppressions used: 00:22:53.976 count bytes template 00:22:53.976 1 11 /usr/src/fio/parse.c 00:22:53.976 1 8 libtcmalloc_minimal.so 00:22:53.976 1 904 libcrypto.so 00:22:53.976 ----------------------------------------------------- 00:22:53.976 00:22:53.976 ************************************ 00:22:53.976 END TEST xnvme_fio_plugin 00:22:53.976 ************************************ 00:22:53.976 00:22:53.976 real 0m14.969s 00:22:53.976 user 0m9.375s 00:22:53.976 sys 0m4.974s 00:22:53.976 10:14:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:53.976 10:14:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:22:53.976 Process with pid 73444 is not found 00:22:53.976 10:14:24 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 73444 00:22:53.976 10:14:24 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73444 ']' 00:22:53.976 10:14:24 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 
73444 00:22:53.976 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (73444) - No such process 00:22:53.976 10:14:24 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 73444 is not found' 00:22:53.976 00:22:53.976 real 3m54.656s 00:22:53.976 user 2m11.587s 00:22:53.976 sys 1m26.825s 00:22:53.976 10:14:24 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:53.976 ************************************ 00:22:53.976 END TEST nvme_xnvme 00:22:53.976 ************************************ 00:22:53.976 10:14:24 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:53.976 10:14:24 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:22:53.976 10:14:24 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:53.976 10:14:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:53.976 10:14:24 -- common/autotest_common.sh@10 -- # set +x 00:22:53.976 ************************************ 00:22:53.976 START TEST blockdev_xnvme 00:22:53.976 ************************************ 00:22:53.976 10:14:24 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:22:53.976 * Looking for test storage... 00:22:53.976 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:22:53.976 10:14:24 blockdev_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:53.976 10:14:24 blockdev_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:53.976 10:14:24 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:22:53.976 10:14:24 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:53.976 10:14:24 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:53.976 10:14:24 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:53.976 10:14:24 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:53.976 10:14:24 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:22:53.976 10:14:24 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:22:53.976 10:14:24 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:22:53.976 10:14:24 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:22:53.976 10:14:24 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:22:53.976 10:14:24 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:22:53.976 10:14:24 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:22:53.976 10:14:24 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:53.976 10:14:24 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:22:53.976 10:14:24 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:22:53.976 10:14:24 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:53.976 10:14:24 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:53.976 10:14:24 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:22:53.976 10:14:24 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:22:53.976 10:14:24 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:53.976 10:14:24 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:22:53.976 10:14:24 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:22:53.976 10:14:24 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:22:53.976 10:14:24 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:22:53.976 10:14:24 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:53.976 10:14:24 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:22:53.976 10:14:24 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:22:53.976 10:14:24 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:53.976 10:14:24 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:53.976 10:14:24 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:22:53.976 10:14:24 blockdev_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:53.976 10:14:24 blockdev_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:53.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.976 --rc genhtml_branch_coverage=1 00:22:53.976 --rc genhtml_function_coverage=1 00:22:53.976 --rc genhtml_legend=1 00:22:53.976 --rc geninfo_all_blocks=1 00:22:53.976 --rc geninfo_unexecuted_blocks=1 00:22:53.976 00:22:53.976 ' 00:22:53.976 10:14:24 blockdev_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:53.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.976 --rc genhtml_branch_coverage=1 00:22:53.976 --rc genhtml_function_coverage=1 00:22:53.976 --rc genhtml_legend=1 00:22:53.976 --rc geninfo_all_blocks=1 00:22:53.976 --rc geninfo_unexecuted_blocks=1 00:22:53.976 00:22:53.976 ' 00:22:53.976 10:14:24 blockdev_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:53.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.976 --rc genhtml_branch_coverage=1 00:22:53.976 --rc genhtml_function_coverage=1 00:22:53.976 --rc genhtml_legend=1 00:22:53.976 --rc geninfo_all_blocks=1 00:22:53.976 --rc geninfo_unexecuted_blocks=1 00:22:53.976 00:22:53.976 ' 00:22:53.976 10:14:24 blockdev_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:53.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.976 --rc genhtml_branch_coverage=1 00:22:53.976 --rc genhtml_function_coverage=1 00:22:53.976 --rc genhtml_legend=1 00:22:53.976 --rc geninfo_all_blocks=1 00:22:53.976 --rc geninfo_unexecuted_blocks=1 00:22:53.976 00:22:53.976 ' 00:22:53.976 10:14:24 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:22:53.976 10:14:24 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:22:53.976 10:14:24 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:22:53.976 10:14:24 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:22:53.976 10:14:24 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:22:53.976 10:14:24 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:22:53.976 10:14:24 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:22:53.976 10:14:24 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:22:53.976 10:14:24 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:22:53.976 10:14:24 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:22:53.976 10:14:24 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:22:53.976 10:14:24 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:22:53.976 10:14:24 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:22:53.976 10:14:24 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:22:53.976 10:14:24 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:22:53.976 10:14:24 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:22:53.976 10:14:24 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:22:53.976 10:14:24 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:22:53.976 10:14:24 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:22:53.976 10:14:24 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:22:53.976 10:14:24 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:22:53.976 10:14:24 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:22:53.976 10:14:24 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:22:53.976 10:14:24 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:22:53.976 10:14:24 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74112 00:22:53.976 10:14:24 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:22:53.976 10:14:24 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 74112 00:22:53.976 10:14:24 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 74112 ']' 00:22:53.976 10:14:24 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:53.976 10:14:24 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:53.976 10:14:24 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:22:53.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:53.976 10:14:24 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:53.976 10:14:24 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:53.976 10:14:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:54.235 [2024-12-09 10:14:24.877963] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
00:22:54.235 [2024-12-09 10:14:24.878199] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74112 ] 00:22:54.493 [2024-12-09 10:14:25.071552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.493 [2024-12-09 10:14:25.231943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:55.428 10:14:26 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:55.428 10:14:26 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:22:55.428 10:14:26 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:22:55.428 10:14:26 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:22:55.428 10:14:26 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:22:55.428 10:14:26 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:22:55.428 10:14:26 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:55.995 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:56.563 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:22:56.563 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:22:56.563 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:22:56.563 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:22:56.563 10:14:27 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:22:56.563 10:14:27 
blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1c1n1 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:56.563 10:14:27 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:22:56.563 10:14:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:22:56.563 10:14:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:22:56.563 10:14:27 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:22:56.563 10:14:27 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:22:56.563 10:14:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:22:56.563 10:14:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:22:56.563 10:14:27 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:22:56.563 10:14:27 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:22:56.563 10:14:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:22:56.563 10:14:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:22:56.563 10:14:27 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism -c") 00:22:56.563 10:14:27 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:22:56.563 10:14:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:22:56.563 10:14:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:22:56.563 10:14:27 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:22:56.563 10:14:27 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:22:56.563 10:14:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:22:56.563 10:14:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:22:56.563 10:14:27 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:22:56.563 10:14:27 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:22:56.563 10:14:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:22:56.563 10:14:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:22:56.563 10:14:27 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:22:56.563 10:14:27 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:22:56.563 10:14:27 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:56.563 10:14:27 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:22:56.563 nvme0n1 00:22:56.563 nvme0n2 00:22:56.563 nvme0n3 00:22:56.563 nvme1n1 00:22:56.563 nvme2n1 00:22:56.563 nvme3n1 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.563 10:14:27 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.563 10:14:27 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:22:56.563 10:14:27 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.563 10:14:27 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.563 10:14:27 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:56.563 
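(setup_xnvme_conf, traced above, walks /dev/nvme*n* and queues one bdev_xnvme_create line per block device, which the test then feeds to the RPC layer in one batch. A condensed sketch of that loop under the io_uring mechanism selected at blockdev.sh@88; the zoned-device filtering done by get_zoned_devs is omitted here, and each command is issued as a separate rpc.py call for simplicity:

  io_mechanism=io_uring
  nvmes=()
  for nvme in /dev/nvme*n*; do
    [[ -b $nvme ]] || continue                       # block devices only
    nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c")
  done
  for cmd in "${nvmes[@]}"; do
    ./scripts/rpc.py $cmd                            # word-splitting into args is intended
  done
)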
10:14:27 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.563 10:14:27 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:22:56.563 10:14:27 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:56.563 10:14:27 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:22:56.563 10:14:27 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.822 10:14:27 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:22:56.823 10:14:27 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "6dbc27ea-18ce-4c07-b0db-f6c8f4d786be"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6dbc27ea-18ce-4c07-b0db-f6c8f4d786be",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "84a97370-4f72-4dfb-8619-c25c3f4f947d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "84a97370-4f72-4dfb-8619-c25c3f4f947d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "b7b68f86-ea12-4145-980e-cff906749b6f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b7b68f86-ea12-4145-980e-cff906749b6f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' 
"7145743d-73cc-4889-b6b0-d91288d7ecb7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "7145743d-73cc-4889-b6b0-d91288d7ecb7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "94af174a-2a4b-44f7-a2a0-d32b1510eb6a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "94af174a-2a4b-44f7-a2a0-d32b1510eb6a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "73256b0e-503b-4ef1-862d-a69cbac83422"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "73256b0e-503b-4ef1-862d-a69cbac83422",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:22:56.823 10:14:27 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:22:56.823 10:14:27 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:22:56.823 10:14:27 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:22:56.823 10:14:27 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:22:56.823 10:14:27 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 74112 00:22:56.823 10:14:27 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 74112 ']' 00:22:56.823 10:14:27 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 74112 00:22:56.823 10:14:27 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:22:56.823 10:14:27 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:56.823 10:14:27 blockdev_xnvme -- common/autotest_common.sh@960 -- # 
ps --no-headers -o comm= 74112 00:22:56.823 killing process with pid 74112 00:22:56.823 10:14:27 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:56.823 10:14:27 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:56.823 10:14:27 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74112' 00:22:56.823 10:14:27 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 74112 00:22:56.823 10:14:27 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 74112 00:22:59.383 10:14:29 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:59.383 10:14:29 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:22:59.383 10:14:29 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:22:59.383 10:14:29 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:59.383 10:14:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:59.383 ************************************ 00:22:59.383 START TEST bdev_hello_world 00:22:59.383 ************************************ 00:22:59.383 10:14:29 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:22:59.383 [2024-12-09 10:14:29.667501] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:22:59.383 [2024-12-09 10:14:29.667670] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74412 ] 00:22:59.383 [2024-12-09 10:14:29.843626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.383 [2024-12-09 10:14:29.993303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:59.950 [2024-12-09 10:14:30.450968] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:22:59.950 [2024-12-09 10:14:30.451040] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:22:59.950 [2024-12-09 10:14:30.451063] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:22:59.950 [2024-12-09 10:14:30.453429] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:22:59.950 [2024-12-09 10:14:30.453906] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:22:59.950 [2024-12-09 10:14:30.453944] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:22:59.950 [2024-12-09 10:14:30.454196] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
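The JSON wall above is the raw bdev_get_bdevs output that blockdev.sh@785-787 reduces to a name list; the first unclaimed bdev (nvme0n1 here) becomes hello_world_bdev, and the hello_bdev example that just printed "Hello World!" was run against it. A compressed sketch of those steps, assuming the test app is still serving RPC on the default /var/tmp/spdk.sock:

# Sketch only: equivalent of the mapfile/jq steps traced at blockdev.sh@785-787.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
  | jq -r '.[] | select(.claimed == false) | .name'   # -> nvme0n1, nvme0n2, ... nvme3n1
# The first name is then handed to the example binary (same invocation as run_test above):
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
  --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1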
00:22:59.950 00:22:59.950 [2024-12-09 10:14:30.454239] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:23:00.885 00:23:00.885 real 0m1.938s 00:23:00.885 user 0m1.523s 00:23:00.885 sys 0m0.297s 00:23:00.885 10:14:31 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:00.885 10:14:31 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:23:00.885 ************************************ 00:23:00.885 END TEST bdev_hello_world 00:23:00.885 ************************************ 00:23:00.885 10:14:31 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:23:00.885 10:14:31 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:00.885 10:14:31 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:00.885 10:14:31 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:00.885 ************************************ 00:23:00.886 START TEST bdev_bounds 00:23:00.886 ************************************ 00:23:00.886 10:14:31 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:23:00.886 10:14:31 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74453 00:23:00.886 10:14:31 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:23:00.886 10:14:31 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:23:00.886 Process bdevio pid: 74453 00:23:00.886 10:14:31 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74453' 00:23:00.886 10:14:31 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74453 00:23:00.886 10:14:31 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 74453 ']' 00:23:00.886 10:14:31 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:00.886 10:14:31 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:00.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:00.886 10:14:31 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:00.886 10:14:31 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:00.886 10:14:31 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:23:01.144 [2024-12-09 10:14:31.715635] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
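bdev_bounds has just launched bdevio with -w, so the app comes up idle on /var/tmp/spdk.sock instead of running its suites immediately; once waitforlisten sees the socket, the suites are kicked off over RPC by tests.py perform_tests (blockdev.sh@293 below). A minimal sketch of that two-step launch, with the backgrounding assumed from the waitforlisten pattern in this trace:

# Sketch of the bdev_bounds launch sequence (paths as in this run):
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
  --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &   # -w: wait to be told to run tests
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests  # triggers every suite below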
00:23:01.144 [2024-12-09 10:14:31.715863] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74453 ] 00:23:01.144 [2024-12-09 10:14:31.903252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:01.403 [2024-12-09 10:14:32.050976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:01.403 [2024-12-09 10:14:32.051129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:01.403 [2024-12-09 10:14:32.051136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:01.969 10:14:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:01.969 10:14:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:23:01.969 10:14:32 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:23:02.228 I/O targets: 00:23:02.228 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:23:02.228 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:23:02.228 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:23:02.228 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:23:02.228 nvme2n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:23:02.228 nvme3n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:23:02.228 00:23:02.228 00:23:02.228 CUnit - A unit testing framework for C - Version 2.1-3 00:23:02.228 http://cunit.sourceforge.net/ 00:23:02.228 00:23:02.228 00:23:02.228 Suite: bdevio tests on: nvme3n1 00:23:02.228 Test: blockdev write read block ...passed 00:23:02.228 Test: blockdev write zeroes read block ...passed 00:23:02.228 Test: blockdev write zeroes read no split ...passed 00:23:02.228 Test: blockdev write zeroes read split ...passed 00:23:02.228 Test: blockdev write zeroes read split partial ...passed 00:23:02.228 Test: blockdev reset ...passed 00:23:02.228 Test: blockdev write read 8 blocks ...passed 00:23:02.228 Test: blockdev write read size > 128k ...passed 00:23:02.228 Test: blockdev write read invalid size ...passed 00:23:02.228 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:02.228 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:02.228 Test: blockdev write read max offset ...passed 00:23:02.228 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:02.228 Test: blockdev writev readv 8 blocks ...passed 00:23:02.228 Test: blockdev writev readv 30 x 1block ...passed 00:23:02.228 Test: blockdev writev readv block ...passed 00:23:02.228 Test: blockdev writev readv size > 128k ...passed 00:23:02.228 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:02.228 Test: blockdev comparev and writev ...passed 00:23:02.228 Test: blockdev nvme passthru rw ...passed 00:23:02.228 Test: blockdev nvme passthru vendor specific ...passed 00:23:02.228 Test: blockdev nvme admin passthru ...passed 00:23:02.228 Test: blockdev copy ...passed 00:23:02.228 Suite: bdevio tests on: nvme2n1 00:23:02.228 Test: blockdev write read block ...passed 00:23:02.228 Test: blockdev write zeroes read block ...passed 00:23:02.228 Test: blockdev write zeroes read no split ...passed 00:23:02.228 Test: blockdev write zeroes read split ...passed 00:23:02.228 Test: blockdev write zeroes read split partial ...passed 00:23:02.228 Test: blockdev reset ...passed 
00:23:02.228 Test: blockdev write read 8 blocks ...passed 00:23:02.228 Test: blockdev write read size > 128k ...passed 00:23:02.228 Test: blockdev write read invalid size ...passed 00:23:02.228 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:02.228 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:02.228 Test: blockdev write read max offset ...passed 00:23:02.228 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:02.228 Test: blockdev writev readv 8 blocks ...passed 00:23:02.228 Test: blockdev writev readv 30 x 1block ...passed 00:23:02.228 Test: blockdev writev readv block ...passed 00:23:02.228 Test: blockdev writev readv size > 128k ...passed 00:23:02.228 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:02.228 Test: blockdev comparev and writev ...passed 00:23:02.228 Test: blockdev nvme passthru rw ...passed 00:23:02.228 Test: blockdev nvme passthru vendor specific ...passed 00:23:02.228 Test: blockdev nvme admin passthru ...passed 00:23:02.228 Test: blockdev copy ...passed 00:23:02.228 Suite: bdevio tests on: nvme1n1 00:23:02.228 Test: blockdev write read block ...passed 00:23:02.228 Test: blockdev write zeroes read block ...passed 00:23:02.228 Test: blockdev write zeroes read no split ...passed 00:23:02.228 Test: blockdev write zeroes read split ...passed 00:23:02.487 Test: blockdev write zeroes read split partial ...passed 00:23:02.487 Test: blockdev reset ...passed 00:23:02.487 Test: blockdev write read 8 blocks ...passed 00:23:02.487 Test: blockdev write read size > 128k ...passed 00:23:02.487 Test: blockdev write read invalid size ...passed 00:23:02.487 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:02.487 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:02.487 Test: blockdev write read max offset ...passed 00:23:02.487 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:02.487 Test: blockdev writev readv 8 blocks ...passed 00:23:02.487 Test: blockdev writev readv 30 x 1block ...passed 00:23:02.487 Test: blockdev writev readv block ...passed 00:23:02.487 Test: blockdev writev readv size > 128k ...passed 00:23:02.487 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:02.487 Test: blockdev comparev and writev ...passed 00:23:02.487 Test: blockdev nvme passthru rw ...passed 00:23:02.487 Test: blockdev nvme passthru vendor specific ...passed 00:23:02.487 Test: blockdev nvme admin passthru ...passed 00:23:02.487 Test: blockdev copy ...passed 00:23:02.487 Suite: bdevio tests on: nvme0n3 00:23:02.487 Test: blockdev write read block ...passed 00:23:02.487 Test: blockdev write zeroes read block ...passed 00:23:02.487 Test: blockdev write zeroes read no split ...passed 00:23:02.487 Test: blockdev write zeroes read split ...passed 00:23:02.487 Test: blockdev write zeroes read split partial ...passed 00:23:02.487 Test: blockdev reset ...passed 00:23:02.487 Test: blockdev write read 8 blocks ...passed 00:23:02.487 Test: blockdev write read size > 128k ...passed 00:23:02.487 Test: blockdev write read invalid size ...passed 00:23:02.487 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:02.487 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:02.487 Test: blockdev write read max offset ...passed 00:23:02.487 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:02.487 Test: blockdev writev readv 8 blocks 
...passed 00:23:02.487 Test: blockdev writev readv 30 x 1block ...passed 00:23:02.487 Test: blockdev writev readv block ...passed 00:23:02.487 Test: blockdev writev readv size > 128k ...passed 00:23:02.487 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:02.487 Test: blockdev comparev and writev ...passed 00:23:02.487 Test: blockdev nvme passthru rw ...passed 00:23:02.487 Test: blockdev nvme passthru vendor specific ...passed 00:23:02.487 Test: blockdev nvme admin passthru ...passed 00:23:02.487 Test: blockdev copy ...passed 00:23:02.487 Suite: bdevio tests on: nvme0n2 00:23:02.487 Test: blockdev write read block ...passed 00:23:02.487 Test: blockdev write zeroes read block ...passed 00:23:02.487 Test: blockdev write zeroes read no split ...passed 00:23:02.487 Test: blockdev write zeroes read split ...passed 00:23:02.487 Test: blockdev write zeroes read split partial ...passed 00:23:02.487 Test: blockdev reset ...passed 00:23:02.487 Test: blockdev write read 8 blocks ...passed 00:23:02.487 Test: blockdev write read size > 128k ...passed 00:23:02.487 Test: blockdev write read invalid size ...passed 00:23:02.487 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:02.487 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:02.487 Test: blockdev write read max offset ...passed 00:23:02.487 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:02.487 Test: blockdev writev readv 8 blocks ...passed 00:23:02.487 Test: blockdev writev readv 30 x 1block ...passed 00:23:02.487 Test: blockdev writev readv block ...passed 00:23:02.487 Test: blockdev writev readv size > 128k ...passed 00:23:02.487 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:02.487 Test: blockdev comparev and writev ...passed 00:23:02.487 Test: blockdev nvme passthru rw ...passed 00:23:02.487 Test: blockdev nvme passthru vendor specific ...passed 00:23:02.487 Test: blockdev nvme admin passthru ...passed 00:23:02.487 Test: blockdev copy ...passed 00:23:02.487 Suite: bdevio tests on: nvme0n1 00:23:02.487 Test: blockdev write read block ...passed 00:23:02.487 Test: blockdev write zeroes read block ...passed 00:23:02.487 Test: blockdev write zeroes read no split ...passed 00:23:02.487 Test: blockdev write zeroes read split ...passed 00:23:02.487 Test: blockdev write zeroes read split partial ...passed 00:23:02.487 Test: blockdev reset ...passed 00:23:02.487 Test: blockdev write read 8 blocks ...passed 00:23:02.487 Test: blockdev write read size > 128k ...passed 00:23:02.487 Test: blockdev write read invalid size ...passed 00:23:02.487 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:02.487 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:02.487 Test: blockdev write read max offset ...passed 00:23:02.487 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:02.487 Test: blockdev writev readv 8 blocks ...passed 00:23:02.487 Test: blockdev writev readv 30 x 1block ...passed 00:23:02.487 Test: blockdev writev readv block ...passed 00:23:02.487 Test: blockdev writev readv size > 128k ...passed 00:23:02.487 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:02.487 Test: blockdev comparev and writev ...passed 00:23:02.487 Test: blockdev nvme passthru rw ...passed 00:23:02.487 Test: blockdev nvme passthru vendor specific ...passed 00:23:02.487 Test: blockdev nvme admin passthru ...passed 00:23:02.487 Test: blockdev copy ...passed 
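Each of the six suites (one per I/O target listed at the top of the bdevio run) executes the same 23-case list shown above, which is where the totals in the summary that follows come from: 6 x 23 = 138 tests, all passed.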
00:23:02.487 00:23:02.487 Run Summary: Type Total Ran Passed Failed Inactive 00:23:02.487 suites 6 6 n/a 0 0 00:23:02.487 tests 138 138 138 0 0 00:23:02.487 asserts 780 780 780 0 n/a 00:23:02.487 00:23:02.487 Elapsed time = 1.251 seconds 00:23:02.487 0 00:23:02.487 10:14:33 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74453 00:23:02.487 10:14:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 74453 ']' 00:23:02.487 10:14:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 74453 00:23:02.487 10:14:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:23:02.746 10:14:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:02.746 10:14:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74453 00:23:02.746 10:14:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:02.746 10:14:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:02.746 killing process with pid 74453 00:23:02.746 10:14:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74453' 00:23:02.746 10:14:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 74453 00:23:02.746 10:14:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 74453 00:23:03.681 10:14:34 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:23:03.681 00:23:03.681 real 0m2.846s 00:23:03.681 user 0m6.951s 00:23:03.681 sys 0m0.469s 00:23:03.681 10:14:34 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:03.681 10:14:34 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:23:03.681 ************************************ 00:23:03.681 END TEST bdev_bounds 00:23:03.681 ************************************ 00:23:03.681 10:14:34 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:23:03.681 10:14:34 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:03.681 10:14:34 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:03.681 10:14:34 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:03.681 ************************************ 00:23:03.681 START TEST bdev_nbd 00:23:03.681 ************************************ 00:23:03.681 10:14:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:23:03.681 10:14:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:23:03.681 10:14:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:23:03.681 10:14:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:03.681 10:14:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:23:03.681 10:14:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:23:03.681 10:14:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:23:03.681 10:14:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
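nbd_function_test is setting up here: each of the six bdevs will be exported through the kernel NBD driver, with bdev_svc serving a dedicated RPC socket at /var/tmp/spdk-nbd.sock. A compressed sketch of the per-device cycle the following trace performs for nvme0n1 through nvme3n1 (the real script polls /proc/partitions up to 20 times and checks the size of the dd output before cleaning it up):

# Sketch of one start/verify/stop cycle from bdev/nbd_common.sh:
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
rpc nbd_start_disk nvme0n1 /dev/nbd0          # attach the bdev to an NBD node
grep -q -w nbd0 /proc/partitions              # waitfornbd: device is visible to the kernel
dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
   bs=4096 count=1 iflag=direct               # one O_DIRECT read proves I/O works
rpc nbd_get_disks                             # JSON map of nbd_device -> bdev_name
rpc nbd_stop_disk /dev/nbd0                   # detach before the next device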
00:23:03.681 10:14:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:23:03.681 10:14:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:23:03.681 10:14:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:23:03.681 10:14:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:23:03.681 10:14:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:23:03.681 10:14:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:23:03.940 10:14:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:23:03.940 10:14:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:23:03.940 10:14:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74516 00:23:03.940 10:14:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:23:03.940 10:14:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:23:03.940 10:14:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74516 /var/tmp/spdk-nbd.sock 00:23:03.940 10:14:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 74516 ']' 00:23:03.940 10:14:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:23:03.940 10:14:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:03.940 10:14:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:23:03.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:23:03.940 10:14:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:03.940 10:14:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:23:03.940 [2024-12-09 10:14:34.574134] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
00:23:03.940 [2024-12-09 10:14:34.574907] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:04.199 [2024-12-09 10:14:34.747942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.199 [2024-12-09 10:14:34.891651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:04.766 10:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:04.766 10:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:23:04.766 10:14:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:23:04.766 10:14:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:04.766 10:14:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:23:04.766 10:14:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:23:04.766 10:14:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:23:04.766 10:14:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:04.766 10:14:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:23:04.766 10:14:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:23:04.766 10:14:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:23:04.766 10:14:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:23:04.766 10:14:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:23:04.766 10:14:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:23:04.766 10:14:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:23:05.024 10:14:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:23:05.024 10:14:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:23:05.024 10:14:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:23:05.024 10:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:05.024 10:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:05.024 10:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:05.024 10:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:05.024 10:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:05.024 10:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:05.024 10:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:05.024 10:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:05.025 10:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:05.025 
1+0 records in 00:23:05.025 1+0 records out 00:23:05.025 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367267 s, 11.2 MB/s 00:23:05.025 10:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:05.025 10:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:05.025 10:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:05.025 10:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:05.025 10:14:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:05.025 10:14:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:23:05.025 10:14:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:23:05.025 10:14:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:23:05.591 10:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:23:05.591 10:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:23:05.591 10:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:23:05.591 10:14:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:23:05.591 10:14:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:05.591 10:14:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:05.591 10:14:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:05.591 10:14:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:23:05.591 10:14:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:05.591 10:14:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:05.591 10:14:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:05.591 10:14:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:05.591 1+0 records in 00:23:05.591 1+0 records out 00:23:05.591 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000655727 s, 6.2 MB/s 00:23:05.591 10:14:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:05.591 10:14:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:05.591 10:14:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:05.591 10:14:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:05.591 10:14:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:05.591 10:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:23:05.591 10:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:23:05.591 10:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:23:05.850 10:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:23:05.850 10:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:23:05.850 10:14:36 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:23:05.850 10:14:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:23:05.850 10:14:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:05.850 10:14:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:05.850 10:14:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:05.850 10:14:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:23:05.850 10:14:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:05.850 10:14:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:05.850 10:14:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:05.850 10:14:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:05.850 1+0 records in 00:23:05.850 1+0 records out 00:23:05.850 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000672347 s, 6.1 MB/s 00:23:05.850 10:14:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:05.850 10:14:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:05.850 10:14:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:05.850 10:14:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:05.850 10:14:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:05.850 10:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:23:05.850 10:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:23:05.850 10:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:23:06.109 10:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:23:06.109 10:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:23:06.109 10:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:23:06.109 10:14:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:23:06.109 10:14:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:06.109 10:14:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:06.109 10:14:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:06.109 10:14:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:23:06.109 10:14:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:06.109 10:14:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:06.109 10:14:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:06.109 10:14:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:06.109 1+0 records in 00:23:06.109 1+0 records out 00:23:06.109 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00207318 s, 2.0 MB/s 00:23:06.109 10:14:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:06.109 10:14:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:06.109 10:14:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:06.109 10:14:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:06.109 10:14:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:06.109 10:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:23:06.109 10:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:23:06.109 10:14:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:23:06.368 10:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:23:06.368 10:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:23:06.368 10:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:23:06.368 10:14:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:23:06.368 10:14:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:06.368 10:14:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:06.368 10:14:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:06.368 10:14:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:23:06.368 10:14:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:06.368 10:14:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:06.368 10:14:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:06.368 10:14:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:06.368 1+0 records in 00:23:06.368 1+0 records out 00:23:06.368 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000645427 s, 6.3 MB/s 00:23:06.368 10:14:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:06.368 10:14:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:06.368 10:14:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:06.368 10:14:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:06.368 10:14:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:06.368 10:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:23:06.368 10:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:23:06.368 10:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:23:06.627 10:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:23:06.627 10:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:23:06.627 10:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:23:06.627 10:14:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:23:06.627 10:14:37 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:06.627 10:14:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:06.627 10:14:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:06.627 10:14:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:23:06.627 10:14:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:06.627 10:14:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:06.627 10:14:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:06.627 10:14:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:06.627 1+0 records in 00:23:06.627 1+0 records out 00:23:06.627 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000837746 s, 4.9 MB/s 00:23:06.627 10:14:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:06.627 10:14:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:06.627 10:14:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:06.627 10:14:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:06.627 10:14:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:06.627 10:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:23:06.627 10:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:23:06.627 10:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:07.195 10:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:23:07.195 { 00:23:07.195 "nbd_device": "/dev/nbd0", 00:23:07.195 "bdev_name": "nvme0n1" 00:23:07.195 }, 00:23:07.195 { 00:23:07.195 "nbd_device": "/dev/nbd1", 00:23:07.196 "bdev_name": "nvme0n2" 00:23:07.196 }, 00:23:07.196 { 00:23:07.196 "nbd_device": "/dev/nbd2", 00:23:07.196 "bdev_name": "nvme0n3" 00:23:07.196 }, 00:23:07.196 { 00:23:07.196 "nbd_device": "/dev/nbd3", 00:23:07.196 "bdev_name": "nvme1n1" 00:23:07.196 }, 00:23:07.196 { 00:23:07.196 "nbd_device": "/dev/nbd4", 00:23:07.196 "bdev_name": "nvme2n1" 00:23:07.196 }, 00:23:07.196 { 00:23:07.196 "nbd_device": "/dev/nbd5", 00:23:07.196 "bdev_name": "nvme3n1" 00:23:07.196 } 00:23:07.196 ]' 00:23:07.196 10:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:23:07.196 10:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:23:07.196 { 00:23:07.196 "nbd_device": "/dev/nbd0", 00:23:07.196 "bdev_name": "nvme0n1" 00:23:07.196 }, 00:23:07.196 { 00:23:07.196 "nbd_device": "/dev/nbd1", 00:23:07.196 "bdev_name": "nvme0n2" 00:23:07.196 }, 00:23:07.196 { 00:23:07.196 "nbd_device": "/dev/nbd2", 00:23:07.196 "bdev_name": "nvme0n3" 00:23:07.196 }, 00:23:07.196 { 00:23:07.196 "nbd_device": "/dev/nbd3", 00:23:07.196 "bdev_name": "nvme1n1" 00:23:07.196 }, 00:23:07.196 { 00:23:07.196 "nbd_device": "/dev/nbd4", 00:23:07.196 "bdev_name": "nvme2n1" 00:23:07.196 }, 00:23:07.196 { 00:23:07.196 "nbd_device": "/dev/nbd5", 00:23:07.196 "bdev_name": "nvme3n1" 00:23:07.196 } 00:23:07.196 ]' 00:23:07.196 10:14:37 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:23:07.196 10:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:23:07.196 10:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:07.196 10:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:23:07.196 10:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:07.196 10:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:23:07.196 10:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:07.196 10:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:07.455 10:14:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:07.455 10:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:07.455 10:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:07.455 10:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:07.455 10:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:07.455 10:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:07.455 10:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:07.455 10:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:07.455 10:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:07.455 10:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:23:07.726 10:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:07.726 10:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:07.726 10:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:07.726 10:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:07.726 10:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:07.726 10:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:07.726 10:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:07.726 10:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:07.726 10:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:07.726 10:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:23:07.726 10:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:23:07.726 10:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:23:07.726 10:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:23:07.726 10:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:07.726 10:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:07.726 10:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:23:07.726 10:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:07.726 10:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:07.726 10:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:07.726 10:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:23:08.004 10:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:23:08.004 10:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:23:08.004 10:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:23:08.004 10:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:08.004 10:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:08.004 10:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:23:08.004 10:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:08.004 10:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:08.004 10:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:08.004 10:14:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:23:08.263 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:23:08.263 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:23:08.263 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:23:08.263 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:08.263 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:08.263 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:23:08.263 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:08.263 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:08.263 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:08.263 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:23:08.522 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:23:08.522 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:23:08.522 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:23:08.522 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:08.522 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:08.522 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:23:08.522 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:08.522 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:08.522 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:08.523 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:08.523 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:08.781 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:23:08.781 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:23:08.781 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:09.041 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:23:09.041 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:23:09.041 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:09.041 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:23:09.041 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:23:09.041 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:23:09.041 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:23:09.041 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:23:09.041 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:23:09.041 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:23:09.041 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:09.041 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:23:09.041 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:23:09.041 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:23:09.041 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:23:09.041 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:23:09.041 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:09.041 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:23:09.041 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:09.041 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:23:09.041 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:09.041 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:23:09.041 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:09.041 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:23:09.041 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:23:09.300 /dev/nbd0 00:23:09.300 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:09.300 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:09.300 10:14:39 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:09.300 10:14:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:09.300 10:14:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:09.300 10:14:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:09.300 10:14:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:09.300 10:14:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:09.300 10:14:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:09.300 10:14:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:09.300 10:14:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:09.300 1+0 records in 00:23:09.300 1+0 records out 00:23:09.300 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000501258 s, 8.2 MB/s 00:23:09.300 10:14:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:09.300 10:14:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:09.300 10:14:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:09.300 10:14:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:09.300 10:14:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:09.300 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:09.300 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:23:09.300 10:14:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:23:09.559 /dev/nbd1 00:23:09.559 10:14:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:09.559 10:14:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:09.559 10:14:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:23:09.559 10:14:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:09.559 10:14:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:09.559 10:14:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:09.559 10:14:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:23:09.559 10:14:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:09.559 10:14:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:09.559 10:14:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:09.559 10:14:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:09.559 1+0 records in 00:23:09.559 1+0 records out 00:23:09.559 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000567893 s, 7.2 MB/s 00:23:09.559 10:14:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:09.559 10:14:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:09.559 10:14:40 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:09.559 10:14:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:09.559 10:14:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:09.559 10:14:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:09.559 10:14:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:23:09.559 10:14:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:23:09.818 /dev/nbd10 00:23:09.818 10:14:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:23:09.818 10:14:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:23:09.818 10:14:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:23:09.818 10:14:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:09.818 10:14:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:09.818 10:14:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:09.818 10:14:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:23:09.818 10:14:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:09.818 10:14:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:09.818 10:14:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:09.818 10:14:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:09.818 1+0 records in 00:23:09.818 1+0 records out 00:23:09.818 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000580305 s, 7.1 MB/s 00:23:09.818 10:14:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:09.818 10:14:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:09.818 10:14:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:09.818 10:14:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:09.818 10:14:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:09.818 10:14:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:09.818 10:14:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:23:09.818 10:14:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:23:10.077 /dev/nbd11 00:23:10.077 10:14:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:23:10.077 10:14:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:23:10.077 10:14:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:23:10.077 10:14:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:10.077 10:14:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:10.077 10:14:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:10.077 10:14:40 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:23:10.077 10:14:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:10.077 10:14:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:10.077 10:14:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:10.077 10:14:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:10.077 1+0 records in 00:23:10.077 1+0 records out 00:23:10.077 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00087704 s, 4.7 MB/s 00:23:10.077 10:14:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:10.077 10:14:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:10.077 10:14:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:10.077 10:14:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:10.078 10:14:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:10.078 10:14:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:10.078 10:14:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:23:10.078 10:14:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:23:10.337 /dev/nbd12 00:23:10.337 10:14:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:23:10.337 10:14:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:23:10.337 10:14:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:23:10.337 10:14:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:10.337 10:14:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:10.337 10:14:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:10.337 10:14:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:23:10.337 10:14:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:10.337 10:14:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:10.337 10:14:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:10.337 10:14:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:10.337 1+0 records in 00:23:10.337 1+0 records out 00:23:10.337 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000811909 s, 5.0 MB/s 00:23:10.337 10:14:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:10.337 10:14:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:10.337 10:14:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:10.337 10:14:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:10.337 10:14:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:10.337 10:14:41 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:10.337 10:14:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:23:10.337 10:14:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:23:10.596 /dev/nbd13 00:23:10.855 10:14:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:23:10.855 10:14:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:23:10.855 10:14:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:23:10.855 10:14:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:10.855 10:14:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:10.855 10:14:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:10.855 10:14:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:23:10.855 10:14:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:10.855 10:14:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:10.855 10:14:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:10.855 10:14:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:10.855 1+0 records in 00:23:10.855 1+0 records out 00:23:10.855 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00243643 s, 1.7 MB/s 00:23:10.855 10:14:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:10.855 10:14:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:10.855 10:14:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:10.855 10:14:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:10.855 10:14:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:10.855 10:14:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:10.855 10:14:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:23:10.855 10:14:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:10.855 10:14:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:10.855 10:14:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:11.114 10:14:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:23:11.114 { 00:23:11.114 "nbd_device": "/dev/nbd0", 00:23:11.114 "bdev_name": "nvme0n1" 00:23:11.114 }, 00:23:11.114 { 00:23:11.114 "nbd_device": "/dev/nbd1", 00:23:11.114 "bdev_name": "nvme0n2" 00:23:11.114 }, 00:23:11.114 { 00:23:11.114 "nbd_device": "/dev/nbd10", 00:23:11.114 "bdev_name": "nvme0n3" 00:23:11.114 }, 00:23:11.114 { 00:23:11.114 "nbd_device": "/dev/nbd11", 00:23:11.114 "bdev_name": "nvme1n1" 00:23:11.114 }, 00:23:11.114 { 00:23:11.114 "nbd_device": "/dev/nbd12", 00:23:11.114 "bdev_name": "nvme2n1" 00:23:11.114 }, 00:23:11.114 { 00:23:11.114 "nbd_device": "/dev/nbd13", 00:23:11.114 "bdev_name": "nvme3n1" 00:23:11.114 } 00:23:11.114 ]' 00:23:11.114 10:14:41 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # echo '[ 00:23:11.114 { 00:23:11.114 "nbd_device": "/dev/nbd0", 00:23:11.114 "bdev_name": "nvme0n1" 00:23:11.114 }, 00:23:11.114 { 00:23:11.114 "nbd_device": "/dev/nbd1", 00:23:11.114 "bdev_name": "nvme0n2" 00:23:11.114 }, 00:23:11.114 { 00:23:11.114 "nbd_device": "/dev/nbd10", 00:23:11.114 "bdev_name": "nvme0n3" 00:23:11.114 }, 00:23:11.114 { 00:23:11.114 "nbd_device": "/dev/nbd11", 00:23:11.114 "bdev_name": "nvme1n1" 00:23:11.114 }, 00:23:11.114 { 00:23:11.114 "nbd_device": "/dev/nbd12", 00:23:11.114 "bdev_name": "nvme2n1" 00:23:11.114 }, 00:23:11.114 { 00:23:11.114 "nbd_device": "/dev/nbd13", 00:23:11.114 "bdev_name": "nvme3n1" 00:23:11.114 } 00:23:11.114 ]' 00:23:11.114 10:14:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:11.114 10:14:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:23:11.114 /dev/nbd1 00:23:11.114 /dev/nbd10 00:23:11.114 /dev/nbd11 00:23:11.114 /dev/nbd12 00:23:11.114 /dev/nbd13' 00:23:11.114 10:14:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:23:11.114 /dev/nbd1 00:23:11.114 /dev/nbd10 00:23:11.114 /dev/nbd11 00:23:11.114 /dev/nbd12 00:23:11.114 /dev/nbd13' 00:23:11.114 10:14:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:11.114 10:14:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:23:11.114 10:14:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:23:11.114 10:14:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:23:11.114 10:14:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:23:11.114 10:14:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:23:11.114 10:14:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:23:11.114 10:14:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:11.114 10:14:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:23:11.114 10:14:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:23:11.114 10:14:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:23:11.115 10:14:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:23:11.115 256+0 records in 00:23:11.115 256+0 records out 00:23:11.115 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00870406 s, 120 MB/s 00:23:11.115 10:14:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:11.115 10:14:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:23:11.374 256+0 records in 00:23:11.374 256+0 records out 00:23:11.374 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.167593 s, 6.3 MB/s 00:23:11.374 10:14:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:11.374 10:14:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:23:11.374 256+0 records in 00:23:11.374 256+0 records out 00:23:11.374 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.174722 s, 
6.0 MB/s 00:23:11.374 10:14:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:11.374 10:14:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:23:11.632 256+0 records in 00:23:11.632 256+0 records out 00:23:11.632 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.169876 s, 6.2 MB/s 00:23:11.632 10:14:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:11.632 10:14:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:23:11.891 256+0 records in 00:23:11.891 256+0 records out 00:23:11.891 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.164401 s, 6.4 MB/s 00:23:11.891 10:14:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:11.891 10:14:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:23:11.891 256+0 records in 00:23:11.891 256+0 records out 00:23:11.891 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.16602 s, 6.3 MB/s 00:23:11.891 10:14:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:11.891 10:14:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:23:12.150 256+0 records in 00:23:12.150 256+0 records out 00:23:12.150 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.195202 s, 5.4 MB/s 00:23:12.150 10:14:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:23:12.150 10:14:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:23:12.150 10:14:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:12.150 10:14:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:23:12.150 10:14:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:23:12.150 10:14:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:23:12.150 10:14:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:23:12.150 10:14:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:12.150 10:14:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:23:12.150 10:14:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:12.150 10:14:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:23:12.150 10:14:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:12.150 10:14:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:23:12.150 10:14:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:12.150 10:14:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:23:12.150 
10:14:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:12.150 10:14:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:23:12.150 10:14:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:12.150 10:14:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:23:12.150 10:14:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:23:12.150 10:14:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:23:12.150 10:14:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:12.150 10:14:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:23:12.150 10:14:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:12.150 10:14:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:23:12.150 10:14:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:12.150 10:14:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:12.715 10:14:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:12.715 10:14:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:12.715 10:14:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:12.715 10:14:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:12.715 10:14:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:12.715 10:14:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:12.715 10:14:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:12.715 10:14:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:12.715 10:14:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:12.715 10:14:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:23:12.973 10:14:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:12.973 10:14:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:12.973 10:14:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:12.973 10:14:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:12.973 10:14:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:12.973 10:14:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:12.973 10:14:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:12.973 10:14:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:12.973 10:14:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:12.974 10:14:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:23:13.232 10:14:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:23:13.232 10:14:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:23:13.232 10:14:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:23:13.232 10:14:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:13.232 10:14:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:13.232 10:14:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:23:13.232 10:14:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:13.232 10:14:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:13.232 10:14:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:13.232 10:14:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:23:13.491 10:14:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:23:13.491 10:14:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:23:13.491 10:14:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:23:13.491 10:14:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:13.491 10:14:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:13.491 10:14:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:23:13.491 10:14:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:13.491 10:14:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:13.491 10:14:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:13.491 10:14:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:23:13.750 10:14:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:23:13.750 10:14:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:23:13.750 10:14:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:23:13.750 10:14:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:13.750 10:14:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:13.750 10:14:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:23:13.750 10:14:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:13.750 10:14:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:13.750 10:14:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:13.750 10:14:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:23:14.009 10:14:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:23:14.009 10:14:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:23:14.009 10:14:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:23:14.009 10:14:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:14.009 10:14:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:14.009 
10:14:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:23:14.009 10:14:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:14.009 10:14:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:14.009 10:14:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:14.009 10:14:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:14.009 10:14:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:14.268 10:14:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:23:14.268 10:14:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:14.268 10:14:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:23:14.268 10:14:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:23:14.268 10:14:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:23:14.268 10:14:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:14.268 10:14:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:23:14.268 10:14:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:23:14.268 10:14:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:23:14.268 10:14:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:23:14.268 10:14:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:23:14.268 10:14:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:23:14.268 10:14:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:23:14.268 10:14:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:14.268 10:14:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:23:14.268 10:14:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:23:14.835 malloc_lvol_verify 00:23:14.835 10:14:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:23:14.835 d5fb8fc4-333a-4e81-9929-a4285f599df3 00:23:14.835 10:14:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:23:15.094 dd592237-abfb-419b-9ae9-9b51272eeebe 00:23:15.094 10:14:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:23:15.352 /dev/nbd0 00:23:15.352 10:14:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:23:15.352 10:14:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:23:15.352 10:14:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:23:15.352 10:14:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:23:15.352 10:14:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:23:15.611 mke2fs 1.47.0 (5-Feb-2023) 00:23:15.611 Discarding device blocks: 0/4096 
done 00:23:15.611 Creating filesystem with 4096 1k blocks and 1024 inodes 00:23:15.612 00:23:15.612 Allocating group tables: 0/1 done 00:23:15.612 Writing inode tables: 0/1 done 00:23:15.612 Creating journal (1024 blocks): done 00:23:15.612 Writing superblocks and filesystem accounting information: 0/1 done 00:23:15.612 00:23:15.612 10:14:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:23:15.612 10:14:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:15.612 10:14:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:15.612 10:14:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:15.612 10:14:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:23:15.612 10:14:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:15.612 10:14:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:15.612 10:14:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:15.612 10:14:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:15.612 10:14:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:15.612 10:14:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:15.612 10:14:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:15.612 10:14:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:15.879 10:14:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:15.879 10:14:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:15.879 10:14:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74516 00:23:15.879 10:14:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 74516 ']' 00:23:15.879 10:14:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 74516 00:23:15.879 10:14:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:23:15.879 10:14:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:15.879 10:14:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74516 00:23:15.879 10:14:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:15.879 10:14:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:15.879 killing process with pid 74516 00:23:15.879 10:14:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74516' 00:23:15.879 10:14:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 74516 00:23:15.879 10:14:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 74516 00:23:16.852 10:14:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:23:16.852 00:23:16.852 real 0m13.064s 00:23:16.852 user 0m18.232s 00:23:16.852 sys 0m4.411s 00:23:16.852 10:14:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:16.852 10:14:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:23:16.852 ************************************ 00:23:16.852 END TEST bdev_nbd 00:23:16.852 ************************************ 
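The lvol check that closes the nbd test above reduces to a short RPC sequence against the spdk-nbd socket. Below is a minimal standalone sketch of that flow, under the assumption that an SPDK application is already serving RPCs on /var/tmp/spdk-nbd.sock and that the nbd kernel module is loaded with /dev/nbd0 free; every rpc.py subcommand and argument is taken from the trace itself, but the script as a whole is an illustration, not the test's actual code.

#!/usr/bin/env bash
# Sketch of the lvol-over-NBD verification (nbd_with_lvol_verify) seen above.
set -euo pipefail
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }

rpc bdev_malloc_create -b malloc_lvol_verify 16 512  # 16 MiB malloc bdev, 512 B blocks
rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs  # logical volume store on top of it
rpc bdev_lvol_create lvol 4 -l lvs                   # 4 MiB lvol inside the store
rpc nbd_start_disk lvs/lvol /dev/nbd0                # export the lvol as /dev/nbd0

# Wait until the kernel reports a non-zero capacity (the test polls
# /sys/block/nbd0/size the same way) before touching the device.
while [[ ! -e /sys/block/nbd0/size ]] || (( $(cat /sys/block/nbd0/size) == 0 )); do
        sleep 0.1
done

mkfs.ext4 /dev/nbd0          # a clean mkfs exercises both reads and writes
rpc nbd_stop_disk /dev/nbd0

The per-device loop earlier in the trace follows the same pattern: after each nbd_start_disk, a waitfornbd helper polls /proc/partitions for the device name and then proves the device is readable with a single O_DIRECT read. A simplified mirror of that helper (the real one in common/autotest_common.sh also retries the read itself up to 20 times):

waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
                grep -q -w "$nbd_name" /proc/partitions && break
                sleep 0.1
        done
        # One direct-I/O 4 KiB read; a non-empty output file means the
        # device is actually serving data.
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [[ $size != 0 ]]
}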
00:23:16.852 10:14:47 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:23:16.852 10:14:47 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:23:16.852 10:14:47 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:23:16.852 10:14:47 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:23:16.852 10:14:47 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:16.852 10:14:47 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:16.852 10:14:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:16.852 ************************************ 00:23:16.852 START TEST bdev_fio 00:23:16.852 ************************************ 00:23:16.852 10:14:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:23:16.852 10:14:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:23:16.852 10:14:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:23:16.852 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:23:16.852 10:14:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:23:16.852 10:14:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:23:16.852 10:14:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:23:16.852 10:14:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:23:16.852 10:14:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:23:16.852 10:14:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:16.852 10:14:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:23:16.852 10:14:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:23:16.852 10:14:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:23:16.852 10:14:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:23:16.852 10:14:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:23:16.852 10:14:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:23:16.852 10:14:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:23:16.852 10:14:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:16.852 10:14:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:23:16.852 10:14:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:23:16.852 10:14:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:23:16.852 10:14:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:23:16.852 10:14:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:23:17.112 10:14:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:23:17.112 10:14:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:23:17.112 10:14:47 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:23:17.112 10:14:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:23:17.112 10:14:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:23:17.112 10:14:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:23:17.112 10:14:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:23:17.112 10:14:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:23:17.112 10:14:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:23:17.112 10:14:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:23:17.112 10:14:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:23:17.112 10:14:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:23:17.112 10:14:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:23:17.112 10:14:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:23:17.112 10:14:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:23:17.112 10:14:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:23:17.112 10:14:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:23:17.112 10:14:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:23:17.112 10:14:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:23:17.112 10:14:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:23:17.112 10:14:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:23:17.112 10:14:47 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:23:17.112 10:14:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:23:17.112 10:14:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:17.112 10:14:47 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:23:17.112 ************************************ 00:23:17.112 START TEST bdev_fio_rw_verify 00:23:17.112 ************************************ 00:23:17.112 10:14:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:23:17.112 10:14:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:23:17.112 10:14:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:17.112 10:14:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:17.112 10:14:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:17.112 10:14:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:17.112 10:14:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:23:17.112 10:14:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:17.112 10:14:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:17.112 10:14:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:17.112 10:14:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:23:17.112 10:14:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:17.112 10:14:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:17.112 10:14:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:17.112 10:14:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:23:17.112 10:14:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:17.112 10:14:47 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:23:17.371 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:23:17.371 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:23:17.371 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:23:17.371 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:23:17.371 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:23:17.371 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:23:17.371 fio-3.35 00:23:17.371 Starting 6 threads 00:23:29.583 00:23:29.583 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=74962: Mon Dec 9 10:14:58 2024 00:23:29.583 read: IOPS=29.3k, BW=114MiB/s (120MB/s)(1145MiB/10001msec) 00:23:29.583 slat (usec): min=3, max=1526, avg= 7.62, stdev= 6.57 00:23:29.583 clat (usec): min=117, max=4669, avg=622.90, stdev=228.12 00:23:29.583 lat (usec): min=125, max=4683, avg=630.52, stdev=229.07 
00:23:29.583 clat percentiles (usec): 00:23:29.583 | 50.000th=[ 644], 99.000th=[ 1106], 99.900th=[ 1762], 99.990th=[ 3916], 00:23:29.583 | 99.999th=[ 4686] 00:23:29.583 write: IOPS=29.5k, BW=115MiB/s (121MB/s)(1154MiB/10001msec); 0 zone resets 00:23:29.583 slat (usec): min=7, max=1501, avg=27.86, stdev=29.74 00:23:29.583 clat (usec): min=100, max=4387, avg=722.98, stdev=233.35 00:23:29.583 lat (usec): min=114, max=4417, avg=750.84, stdev=235.88 00:23:29.583 clat percentiles (usec): 00:23:29.583 | 50.000th=[ 742], 99.000th=[ 1303], 99.900th=[ 1827], 99.990th=[ 2507], 00:23:29.583 | 99.999th=[ 4359] 00:23:29.583 bw ( KiB/s): min=99345, max=143460, per=100.00%, avg=118701.58, stdev=2236.54, samples=114 00:23:29.583 iops : min=24834, max=35864, avg=29674.89, stdev=559.13, samples=114 00:23:29.583 lat (usec) : 250=3.15%, 500=21.06%, 750=36.48%, 1000=33.20% 00:23:29.583 lat (msec) : 2=6.04%, 4=0.06%, 10=0.01% 00:23:29.583 cpu : usr=57.79%, sys=27.82%, ctx=8796, majf=0, minf=24845 00:23:29.583 IO depths : 1=11.8%, 2=24.2%, 4=50.8%, 8=13.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:29.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.583 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.583 issued rwts: total=293066,295394,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:29.583 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:29.583 00:23:29.583 Run status group 0 (all jobs): 00:23:29.583 READ: bw=114MiB/s (120MB/s), 114MiB/s-114MiB/s (120MB/s-120MB/s), io=1145MiB (1200MB), run=10001-10001msec 00:23:29.583 WRITE: bw=115MiB/s (121MB/s), 115MiB/s-115MiB/s (121MB/s-121MB/s), io=1154MiB (1210MB), run=10001-10001msec 00:23:29.583 ----------------------------------------------------- 00:23:29.583 Suppressions used: 00:23:29.583 count bytes template 00:23:29.583 6 48 /usr/src/fio/parse.c 00:23:29.583 2136 205056 /usr/src/fio/iolog.c 00:23:29.583 1 8 libtcmalloc_minimal.so 00:23:29.583 1 904 libcrypto.so 00:23:29.583 ----------------------------------------------------- 00:23:29.583 00:23:29.583 00:23:29.583 real 0m12.519s 00:23:29.583 user 0m36.646s 00:23:29.583 sys 0m17.147s 00:23:29.583 10:15:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:29.583 ************************************ 00:23:29.583 END TEST bdev_fio_rw_verify 00:23:29.583 ************************************ 00:23:29.583 10:15:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:23:29.583 10:15:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:23:29.583 10:15:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:29.583 10:15:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:23:29.583 10:15:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:29.583 10:15:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:23:29.583 10:15:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:23:29.583 10:15:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:23:29.583 10:15:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:23:29.584 10:15:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # 
'[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:23:29.584 10:15:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:23:29.584 10:15:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:23:29.584 10:15:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:29.584 10:15:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:23:29.584 10:15:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:23:29.584 10:15:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:23:29.584 10:15:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:23:29.584 10:15:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:23:29.584 10:15:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "6dbc27ea-18ce-4c07-b0db-f6c8f4d786be"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6dbc27ea-18ce-4c07-b0db-f6c8f4d786be",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "84a97370-4f72-4dfb-8619-c25c3f4f947d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "84a97370-4f72-4dfb-8619-c25c3f4f947d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "b7b68f86-ea12-4145-980e-cff906749b6f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b7b68f86-ea12-4145-980e-cff906749b6f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' 
"compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "7145743d-73cc-4889-b6b0-d91288d7ecb7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "7145743d-73cc-4889-b6b0-d91288d7ecb7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "94af174a-2a4b-44f7-a2a0-d32b1510eb6a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "94af174a-2a4b-44f7-a2a0-d32b1510eb6a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "73256b0e-503b-4ef1-862d-a69cbac83422"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "73256b0e-503b-4ef1-862d-a69cbac83422",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:23:29.584 10:15:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:23:29.584 10:15:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:29.584 /home/vagrant/spdk_repo/spdk 00:23:29.584 10:15:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:23:29.584 10:15:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:23:29.584 10:15:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:23:29.584 00:23:29.584 real 0m12.697s 00:23:29.584 user 0m36.740s 00:23:29.584 sys 0m17.231s 00:23:29.584 10:15:00 
blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:29.584 ************************************ 00:23:29.584 END TEST bdev_fio 00:23:29.584 ************************************ 00:23:29.584 10:15:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:23:29.584 10:15:00 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:29.584 10:15:00 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:23:29.584 10:15:00 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:23:29.584 10:15:00 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:29.584 10:15:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:29.584 ************************************ 00:23:29.584 START TEST bdev_verify 00:23:29.584 ************************************ 00:23:29.584 10:15:00 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:23:29.842 [2024-12-09 10:15:00.425401] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:23:29.842 [2024-12-09 10:15:00.425608] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75126 ] 00:23:29.842 [2024-12-09 10:15:00.606562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:30.115 [2024-12-09 10:15:00.764195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:30.115 [2024-12-09 10:15:00.764211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.681 Running I/O for 5 seconds... 
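While the run proceeds, the invocation itself is worth spelling out: bdev_verify is a plain bdevperf pass over the xnvme bdevs described in bdev.json. Reassembled from the run line above (paths and flags copied from the trace; the flag annotations are this writeup's reading of them, not quoted from bdevperf's help):

# bdev_verify as launched above:
#   -q 128     queue depth per job
#   -o 4096    4 KiB per I/O
#   -w verify  write, read back, and compare the data
#   -t 5       run for 5 seconds
#   -C         let every core drive every bdev
#   -m 0x3     run reactors on cores 0 and 1
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3

With -m 0x3 and -C together each bdev appears to get one job per core, which would explain why every device shows up twice in the summary table that follows (Core Mask 0x1 and 0x2).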
00:23:32.626 23456.00 IOPS, 91.62 MiB/s [2024-12-09T10:15:04.813Z] 24496.00 IOPS, 95.69 MiB/s [2024-12-09T10:15:05.800Z] 24949.33 IOPS, 97.46 MiB/s [2024-12-09T10:15:06.739Z] 25032.00 IOPS, 97.78 MiB/s [2024-12-09T10:15:06.739Z] 24492.80 IOPS, 95.68 MiB/s 00:23:35.942 Latency(us) 00:23:35.942 [2024-12-09T10:15:06.739Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:35.942 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:35.942 Verification LBA range: start 0x0 length 0x80000 00:23:35.942 nvme0n1 : 5.08 1787.66 6.98 0.00 0.00 71474.88 12690.15 65297.69 00:23:35.942 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:35.942 Verification LBA range: start 0x80000 length 0x80000 00:23:35.942 nvme0n1 : 5.08 1814.54 7.09 0.00 0.00 70420.09 6851.49 75783.45 00:23:35.942 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:35.942 Verification LBA range: start 0x0 length 0x80000 00:23:35.942 nvme0n2 : 5.07 1791.95 7.00 0.00 0.00 71172.33 15609.48 69110.69 00:23:35.942 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:35.942 Verification LBA range: start 0x80000 length 0x80000 00:23:35.942 nvme0n2 : 5.07 1793.27 7.00 0.00 0.00 71136.80 10724.07 70540.57 00:23:35.942 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:35.942 Verification LBA range: start 0x0 length 0x80000 00:23:35.942 nvme0n3 : 5.08 1790.63 6.99 0.00 0.00 71097.81 10604.92 65774.31 00:23:35.942 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:35.942 Verification LBA range: start 0x80000 length 0x80000 00:23:35.942 nvme0n3 : 5.07 1792.64 7.00 0.00 0.00 71032.47 15490.33 64344.44 00:23:35.943 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:35.943 Verification LBA range: start 0x0 length 0x20000 00:23:35.943 nvme1n1 : 5.09 1786.05 6.98 0.00 0.00 71154.60 17277.67 67680.81 00:23:35.943 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:35.943 Verification LBA range: start 0x20000 length 0x20000 00:23:35.943 nvme1n1 : 5.07 1791.89 7.00 0.00 0.00 70930.76 17992.61 60769.75 00:23:35.943 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:35.943 Verification LBA range: start 0x0 length 0xa0000 00:23:35.943 nvme2n1 : 5.09 1785.43 6.97 0.00 0.00 71046.58 10843.23 71970.44 00:23:35.943 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:35.943 Verification LBA range: start 0xa0000 length 0xa0000 00:23:35.943 nvme2n1 : 5.08 1790.55 6.99 0.00 0.00 70870.71 10724.07 69587.32 00:23:35.943 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:35.943 Verification LBA range: start 0x0 length 0xbd0bd 00:23:35.943 nvme3n1 : 5.10 3213.80 12.55 0.00 0.00 39327.84 4051.32 58624.93 00:23:35.943 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:35.943 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:23:35.943 nvme3n1 : 5.07 3051.12 11.92 0.00 0.00 41494.62 3872.58 72447.07 00:23:35.943 [2024-12-09T10:15:06.740Z] =================================================================================================================== 00:23:35.943 [2024-12-09T10:15:06.740Z] Total : 24189.53 94.49 0.00 0.00 63088.08 3872.58 75783.45 00:23:36.881 00:23:36.881 real 0m7.233s 00:23:36.881 user 0m11.228s 00:23:36.881 sys 0m1.920s 00:23:36.881 10:15:07 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:23:36.881 10:15:07 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:23:36.881 ************************************ 00:23:36.881 END TEST bdev_verify 00:23:36.881 ************************************ 00:23:36.881 10:15:07 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:23:36.881 10:15:07 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:23:36.881 10:15:07 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:36.881 10:15:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:36.881 ************************************ 00:23:36.881 START TEST bdev_verify_big_io 00:23:36.881 ************************************ 00:23:36.881 10:15:07 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:23:37.140 [2024-12-09 10:15:07.710180] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:23:37.140 [2024-12-09 10:15:07.710364] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75225 ] 00:23:37.140 [2024-12-09 10:15:07.885270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:37.399 [2024-12-09 10:15:08.022591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:37.399 [2024-12-09 10:15:08.022612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:37.966 Running I/O for 5 seconds... 
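The big-I/O pass launched above reuses the same bdevperf harness; the only change from the previous verify run is the I/O size, raised from 4 KiB to 64 KiB. A sketch of the run line, reconstructed from the trace:

# Identical to the earlier invocation except for -o:
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3   # -o 65536 = 64 KiB per I/O

In the results below, per-device IOPS fall roughly in step with the 16x larger I/O size, leaving per-device throughput in the same few-MiB/s range as the 4 KiB run.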
00:23:43.785 1792.00 IOPS, 112.00 MiB/s [2024-12-09T10:15:14.582Z] 3675.00 IOPS, 229.69 MiB/s
00:23:43.785 Latency(us)
00:23:43.785 [2024-12-09T10:15:14.582Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:43.785 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:23:43.785 Verification LBA range: start 0x0 length 0x8000
00:23:43.785 nvme0n1 : 5.83 98.74 6.17 0.00 0.00 1237637.43 55526.87 2760614.63
00:23:43.785 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:23:43.785 Verification LBA range: start 0x8000 length 0x8000
00:23:43.785 nvme0n1 : 5.85 153.07 9.57 0.00 0.00 806713.05 99138.09 892242.85
00:23:43.785 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:23:43.785 Verification LBA range: start 0x0 length 0x8000
00:23:43.785 nvme0n2 : 5.76 150.10 9.38 0.00 0.00 808993.37 81026.33 865551.83
00:23:43.785 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:23:43.785 Verification LBA range: start 0x8000 length 0x8000
00:23:43.785 nvme0n2 : 5.86 163.69 10.23 0.00 0.00 747461.41 37176.79 941811.90
00:23:43.785 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:23:43.785 Verification LBA range: start 0x0 length 0x8000
00:23:43.785 nvme0n3 : 5.68 146.57 9.16 0.00 0.00 807628.37 59101.56 1662469.59
00:23:43.785 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:23:43.785 Verification LBA range: start 0x8000 length 0x8000
00:23:43.785 nvme0n3 : 5.83 148.21 9.26 0.00 0.00 798473.96 108193.98 819795.78
00:23:43.785 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:23:43.785 Verification LBA range: start 0x0 length 0x2000
00:23:43.785 nvme1n1 : 5.85 128.51 8.03 0.00 0.00 895181.51 89605.59 2120030.02
00:23:43.785 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:23:43.785 Verification LBA range: start 0x2000 length 0x2000
00:23:43.785 nvme1n1 : 5.87 106.33 6.65 0.00 0.00 1089933.19 34317.03 2196290.09
00:23:43.785 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:23:43.785 Verification LBA range: start 0x0 length 0xa000
00:23:43.785 nvme2n1 : 5.84 142.51 8.91 0.00 0.00 780584.17 122016.12 1799737.72
00:23:43.785 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:23:43.785 Verification LBA range: start 0xa000 length 0xa000
00:23:43.785 nvme2n1 : 5.87 138.62 8.66 0.00 0.00 813819.78 13941.29 2516582.40
00:23:43.785 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:23:43.785 Verification LBA range: start 0x0 length 0xbd0b
00:23:43.785 nvme3n1 : 5.86 218.38 13.65 0.00 0.00 500188.23 6166.34 819795.78
00:23:43.785 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:23:43.785 Verification LBA range: start 0xbd0b length 0xbd0b
00:23:43.785 nvme3n1 : 5.86 196.53 12.28 0.00 0.00 559333.83 9234.62 663462.63
00:23:43.785 [2024-12-09T10:15:14.582Z] ===================================================================================================================
00:23:43.785 [2024-12-09T10:15:14.582Z] Total : 1791.27 111.95 0.00 0.00 781527.78 6166.34 2760614.63
00:23:45.165
00:23:45.165 real 0m8.296s
00:23:45.165 user 0m14.870s
00:23:45.165 sys 0m0.655s
00:23:45.165 10:15:15 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:45.165 10:15:15 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set
+x
00:23:45.165 ************************************
00:23:45.165 END TEST bdev_verify_big_io
00:23:45.165 ************************************
00:23:45.472 10:15:15 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:23:45.472 10:15:15 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:23:45.472 10:15:15 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:23:45.472 10:15:15 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:23:45.472 ************************************
00:23:45.472 START TEST bdev_write_zeroes
00:23:45.472 ************************************
00:23:45.472 10:15:15 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:23:45.472 [2024-12-09 10:15:16.066463] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization...
00:23:45.472 [2024-12-09 10:15:16.067426] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75339 ]
00:23:45.472 [2024-12-09 10:15:16.249090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:45.737 [2024-12-09 10:15:16.411797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:46.303 Running I/O for 1 seconds...
00:23:47.239 64384.00 IOPS, 251.50 MiB/s
00:23:47.239 Latency(us)
00:23:47.239 [2024-12-09T10:15:18.036Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:47.239 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:23:47.239 nvme0n1 : 1.02 9490.92 37.07 0.00 0.00 13472.13 7983.48 29193.31
00:23:47.239 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:23:47.239 nvme0n2 : 1.03 9477.08 37.02 0.00 0.00 13477.47 7983.48 29550.78
00:23:47.240 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:23:47.240 nvme0n3 : 1.03 9462.89 36.96 0.00 0.00 13483.66 7983.48 30027.40
00:23:47.240 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:23:47.240 nvme1n1 : 1.03 9449.39 36.91 0.00 0.00 13489.09 8043.05 30384.87
00:23:47.240 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:23:47.240 nvme2n1 : 1.03 9435.74 36.86 0.00 0.00 13494.32 8102.63 30980.65
00:23:47.240 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:23:47.240 nvme3n1 : 1.04 16031.63 62.62 0.00 0.00 7930.83 3351.27 29669.93
00:23:47.240 [2024-12-09T10:15:18.037Z] ===================================================================================================================
00:23:47.240 [2024-12-09T10:15:18.037Z] Total : 63347.64 247.45 0.00 0.00 12070.02 3351.27 30980.65
00:23:48.629
00:23:48.629 real 0m3.109s
00:23:48.629 user 0m2.257s
00:23:48.629 sys 0m0.661s
00:23:48.629 10:15:19 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:48.629 10:15:19 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:23:48.629 ************************************
00:23:48.629 END TEST
bdev_write_zeroes 00:23:48.629 ************************************ 00:23:48.629 10:15:19 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:48.629 10:15:19 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:23:48.629 10:15:19 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:48.629 10:15:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:48.629 ************************************ 00:23:48.629 START TEST bdev_json_nonenclosed 00:23:48.629 ************************************ 00:23:48.629 10:15:19 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:48.629 [2024-12-09 10:15:19.227070] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:23:48.629 [2024-12-09 10:15:19.227219] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75405 ] 00:23:48.629 [2024-12-09 10:15:19.400002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.888 [2024-12-09 10:15:19.533123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:48.888 [2024-12-09 10:15:19.533280] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:23:48.888 [2024-12-09 10:15:19.533308] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:23:48.888 [2024-12-09 10:15:19.533323] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:49.147 00:23:49.147 real 0m0.722s 00:23:49.147 user 0m0.463s 00:23:49.147 sys 0m0.153s 00:23:49.147 10:15:19 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:49.147 10:15:19 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:23:49.147 ************************************ 00:23:49.147 END TEST bdev_json_nonenclosed 00:23:49.147 ************************************ 00:23:49.147 10:15:19 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:49.147 10:15:19 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:23:49.147 10:15:19 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:49.147 10:15:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:49.147 ************************************ 00:23:49.147 START TEST bdev_json_nonarray 00:23:49.147 ************************************ 00:23:49.147 10:15:19 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:49.438 [2024-12-09 10:15:20.029946] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
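The bdev_json_nonenclosed run that just completed above is a negative test: bdevperf is fed a configuration whose top level is not a JSON object and must exit non-zero instead of crashing. A hedged reproduction of the idea follows; the payload is illustrative only, since the actual nonenclosed.json fixture is not shown in this log and may differ:

  # Hypothetical input that is NOT enclosed in {}; bdevperf should reject it
  # with "Invalid JSON configuration: not enclosed in {}." as logged above.
  cat > /tmp/nonenclosed.json <<'EOF'
  "subsystems": []
  EOF
  BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  if "$BDEVPERF" --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1; then
      echo "unexpected success parsing a non-enclosed config" >&2; exit 1
  fi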
00:23:49.438 [2024-12-09 10:15:20.030151] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75429 ] 00:23:49.438 [2024-12-09 10:15:20.224589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.696 [2024-12-09 10:15:20.350972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:49.696 [2024-12-09 10:15:20.351121] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:23:49.696 [2024-12-09 10:15:20.351150] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:23:49.696 [2024-12-09 10:15:20.351164] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:49.954 00:23:49.954 real 0m0.763s 00:23:49.954 user 0m0.491s 00:23:49.954 sys 0m0.165s 00:23:49.954 10:15:20 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:49.954 10:15:20 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:23:49.954 ************************************ 00:23:49.954 END TEST bdev_json_nonarray 00:23:49.954 ************************************ 00:23:49.954 10:15:20 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:23:49.954 10:15:20 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:23:49.954 10:15:20 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:23:49.954 10:15:20 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:23:49.954 10:15:20 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:23:49.954 10:15:20 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:23:49.954 10:15:20 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:23:49.954 10:15:20 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:23:49.954 10:15:20 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:23:49.954 10:15:20 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:23:49.954 10:15:20 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:23:49.954 10:15:20 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:50.522 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:51.458 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:51.458 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:52.025 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:23:52.025 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:23:52.025 ************************************ 00:23:52.025 END TEST blockdev_xnvme 00:23:52.025 ************************************ 00:23:52.025 00:23:52.025 real 0m58.197s 00:23:52.025 user 1m38.933s 00:23:52.025 sys 0m29.003s 00:23:52.025 10:15:22 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:52.025 10:15:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:52.025 10:15:22 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:23:52.025 10:15:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:52.025 10:15:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:52.025 10:15:22 -- 
common/autotest_common.sh@10 -- # set +x 00:23:52.025 ************************************ 00:23:52.025 START TEST ublk 00:23:52.025 ************************************ 00:23:52.025 10:15:22 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:23:52.284 * Looking for test storage... 00:23:52.284 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:23:52.284 10:15:22 ublk -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:52.284 10:15:22 ublk -- common/autotest_common.sh@1711 -- # lcov --version 00:23:52.284 10:15:22 ublk -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:52.284 10:15:22 ublk -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:52.284 10:15:22 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:52.284 10:15:22 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:52.284 10:15:22 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:52.284 10:15:22 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:23:52.284 10:15:22 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:23:52.284 10:15:22 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:23:52.284 10:15:22 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:23:52.284 10:15:22 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:23:52.284 10:15:22 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:23:52.284 10:15:22 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:23:52.284 10:15:22 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:52.284 10:15:22 ublk -- scripts/common.sh@344 -- # case "$op" in 00:23:52.284 10:15:22 ublk -- scripts/common.sh@345 -- # : 1 00:23:52.284 10:15:22 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:52.284 10:15:22 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:52.284 10:15:22 ublk -- scripts/common.sh@365 -- # decimal 1 00:23:52.284 10:15:22 ublk -- scripts/common.sh@353 -- # local d=1 00:23:52.284 10:15:22 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:52.284 10:15:22 ublk -- scripts/common.sh@355 -- # echo 1 00:23:52.284 10:15:22 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:23:52.284 10:15:22 ublk -- scripts/common.sh@366 -- # decimal 2 00:23:52.284 10:15:22 ublk -- scripts/common.sh@353 -- # local d=2 00:23:52.284 10:15:22 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:52.284 10:15:22 ublk -- scripts/common.sh@355 -- # echo 2 00:23:52.284 10:15:22 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:23:52.284 10:15:22 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:52.284 10:15:22 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:52.284 10:15:22 ublk -- scripts/common.sh@368 -- # return 0 00:23:52.284 10:15:22 ublk -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:52.284 10:15:22 ublk -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:52.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:52.284 --rc genhtml_branch_coverage=1 00:23:52.284 --rc genhtml_function_coverage=1 00:23:52.284 --rc genhtml_legend=1 00:23:52.284 --rc geninfo_all_blocks=1 00:23:52.285 --rc geninfo_unexecuted_blocks=1 00:23:52.285 00:23:52.285 ' 00:23:52.285 10:15:22 ublk -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:52.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:52.285 --rc genhtml_branch_coverage=1 00:23:52.285 --rc genhtml_function_coverage=1 00:23:52.285 --rc genhtml_legend=1 00:23:52.285 --rc geninfo_all_blocks=1 00:23:52.285 --rc geninfo_unexecuted_blocks=1 00:23:52.285 00:23:52.285 ' 00:23:52.285 10:15:22 ublk -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:52.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:52.285 --rc genhtml_branch_coverage=1 00:23:52.285 --rc genhtml_function_coverage=1 00:23:52.285 --rc genhtml_legend=1 00:23:52.285 --rc geninfo_all_blocks=1 00:23:52.285 --rc geninfo_unexecuted_blocks=1 00:23:52.285 00:23:52.285 ' 00:23:52.285 10:15:22 ublk -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:52.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:52.285 --rc genhtml_branch_coverage=1 00:23:52.285 --rc genhtml_function_coverage=1 00:23:52.285 --rc genhtml_legend=1 00:23:52.285 --rc geninfo_all_blocks=1 00:23:52.285 --rc geninfo_unexecuted_blocks=1 00:23:52.285 00:23:52.285 ' 00:23:52.285 10:15:22 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:23:52.285 10:15:22 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:23:52.285 10:15:22 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:23:52.285 10:15:22 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:23:52.285 10:15:22 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:23:52.285 10:15:22 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:23:52.285 10:15:22 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:23:52.285 10:15:22 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:23:52.285 10:15:22 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:23:52.285 10:15:22 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:23:52.285 10:15:22 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:23:52.285 10:15:22 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:23:52.285 10:15:22 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:23:52.285 10:15:22 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:23:52.285 10:15:22 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:23:52.285 10:15:22 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:23:52.285 10:15:22 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:23:52.285 10:15:22 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:23:52.285 10:15:22 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:23:52.285 10:15:22 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:23:52.285 10:15:22 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:52.285 10:15:22 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:52.285 10:15:22 ublk -- common/autotest_common.sh@10 -- # set +x 00:23:52.285 ************************************ 00:23:52.285 START TEST test_save_ublk_config 00:23:52.285 ************************************ 00:23:52.285 10:15:22 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:23:52.285 10:15:22 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:23:52.285 10:15:22 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=75720 00:23:52.285 10:15:22 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:23:52.285 10:15:22 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 75720 00:23:52.285 10:15:22 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:23:52.285 10:15:23 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75720 ']' 00:23:52.285 10:15:23 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.285 10:15:23 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:52.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:52.285 10:15:23 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:52.285 10:15:23 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:52.285 10:15:23 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:23:52.544 [2024-12-09 10:15:23.132651] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
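test_save_ublk_config, now starting, checks that a ublk target survives a save_config round trip: build a disk over RPC, dump the live configuration, then boot a fresh target from that JSON. A condensed sketch of the flow under the assumption that rpc_cmd in the trace wraps scripts/rpc.py (paths as logged; the disk-creation RPCs appear further below):

  # Condensed sketch of the round trip this test exercises.
  SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$SPDK_TGT" -L ublk & tgtpid=$!        # first target (pid 75720 in this run)
  # ... wait for the RPC socket, create the ublk disk over RPC ...
  config=$("$RPC" save_config)           # capture the live JSON configuration
  kill "$tgtpid"; wait "$tgtpid"
  # Relaunch and replay the captured JSON; the '-c /dev/fd/63' seen later in
  # this log is exactly this kind of process substitution:
  "$SPDK_TGT" -L ublk -c <(echo "$config") &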
00:23:52.544 [2024-12-09 10:15:23.132884] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75720 ] 00:23:52.544 [2024-12-09 10:15:23.326661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.802 [2024-12-09 10:15:23.492687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.737 10:15:24 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:53.737 10:15:24 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:23:53.737 10:15:24 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:23:53.737 10:15:24 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:23:53.737 10:15:24 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.737 10:15:24 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:23:53.737 [2024-12-09 10:15:24.498927] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:23:53.737 [2024-12-09 10:15:24.500183] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:23:53.996 malloc0 00:23:53.996 [2024-12-09 10:15:24.582052] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:23:53.996 [2024-12-09 10:15:24.582185] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:23:53.996 [2024-12-09 10:15:24.582205] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:23:53.996 [2024-12-09 10:15:24.582214] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:23:53.996 [2024-12-09 10:15:24.590062] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:23:53.996 [2024-12-09 10:15:24.590094] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:23:53.996 [2024-12-09 10:15:24.596890] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:23:53.996 [2024-12-09 10:15:24.597040] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:23:53.996 [2024-12-09 10:15:24.610973] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:23:53.996 0 00:23:53.996 10:15:24 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.996 10:15:24 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:23:53.996 10:15:24 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.996 10:15:24 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:23:54.255 10:15:24 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.255 10:15:24 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:23:54.255 "subsystems": [ 00:23:54.255 { 00:23:54.255 "subsystem": "fsdev", 00:23:54.255 "config": [ 00:23:54.255 { 00:23:54.255 "method": "fsdev_set_opts", 00:23:54.255 "params": { 00:23:54.255 "fsdev_io_pool_size": 65535, 00:23:54.255 "fsdev_io_cache_size": 256 00:23:54.255 } 00:23:54.255 } 00:23:54.255 ] 00:23:54.255 }, 00:23:54.255 { 00:23:54.255 "subsystem": "keyring", 00:23:54.255 "config": [] 00:23:54.255 }, 00:23:54.255 { 00:23:54.255 "subsystem": "iobuf", 00:23:54.255 "config": [ 00:23:54.255 { 
00:23:54.255 "method": "iobuf_set_options", 00:23:54.255 "params": { 00:23:54.255 "small_pool_count": 8192, 00:23:54.255 "large_pool_count": 1024, 00:23:54.255 "small_bufsize": 8192, 00:23:54.255 "large_bufsize": 135168, 00:23:54.255 "enable_numa": false 00:23:54.255 } 00:23:54.255 } 00:23:54.255 ] 00:23:54.255 }, 00:23:54.255 { 00:23:54.255 "subsystem": "sock", 00:23:54.255 "config": [ 00:23:54.255 { 00:23:54.255 "method": "sock_set_default_impl", 00:23:54.255 "params": { 00:23:54.255 "impl_name": "posix" 00:23:54.255 } 00:23:54.255 }, 00:23:54.255 { 00:23:54.255 "method": "sock_impl_set_options", 00:23:54.255 "params": { 00:23:54.255 "impl_name": "ssl", 00:23:54.255 "recv_buf_size": 4096, 00:23:54.255 "send_buf_size": 4096, 00:23:54.255 "enable_recv_pipe": true, 00:23:54.255 "enable_quickack": false, 00:23:54.255 "enable_placement_id": 0, 00:23:54.255 "enable_zerocopy_send_server": true, 00:23:54.255 "enable_zerocopy_send_client": false, 00:23:54.255 "zerocopy_threshold": 0, 00:23:54.255 "tls_version": 0, 00:23:54.255 "enable_ktls": false 00:23:54.255 } 00:23:54.255 }, 00:23:54.255 { 00:23:54.255 "method": "sock_impl_set_options", 00:23:54.255 "params": { 00:23:54.255 "impl_name": "posix", 00:23:54.255 "recv_buf_size": 2097152, 00:23:54.255 "send_buf_size": 2097152, 00:23:54.255 "enable_recv_pipe": true, 00:23:54.255 "enable_quickack": false, 00:23:54.255 "enable_placement_id": 0, 00:23:54.255 "enable_zerocopy_send_server": true, 00:23:54.255 "enable_zerocopy_send_client": false, 00:23:54.255 "zerocopy_threshold": 0, 00:23:54.255 "tls_version": 0, 00:23:54.255 "enable_ktls": false 00:23:54.255 } 00:23:54.255 } 00:23:54.255 ] 00:23:54.255 }, 00:23:54.255 { 00:23:54.255 "subsystem": "vmd", 00:23:54.255 "config": [] 00:23:54.255 }, 00:23:54.255 { 00:23:54.255 "subsystem": "accel", 00:23:54.255 "config": [ 00:23:54.255 { 00:23:54.255 "method": "accel_set_options", 00:23:54.255 "params": { 00:23:54.255 "small_cache_size": 128, 00:23:54.255 "large_cache_size": 16, 00:23:54.255 "task_count": 2048, 00:23:54.255 "sequence_count": 2048, 00:23:54.255 "buf_count": 2048 00:23:54.255 } 00:23:54.255 } 00:23:54.255 ] 00:23:54.255 }, 00:23:54.255 { 00:23:54.255 "subsystem": "bdev", 00:23:54.255 "config": [ 00:23:54.255 { 00:23:54.255 "method": "bdev_set_options", 00:23:54.255 "params": { 00:23:54.255 "bdev_io_pool_size": 65535, 00:23:54.255 "bdev_io_cache_size": 256, 00:23:54.255 "bdev_auto_examine": true, 00:23:54.255 "iobuf_small_cache_size": 128, 00:23:54.255 "iobuf_large_cache_size": 16 00:23:54.255 } 00:23:54.255 }, 00:23:54.255 { 00:23:54.255 "method": "bdev_raid_set_options", 00:23:54.255 "params": { 00:23:54.255 "process_window_size_kb": 1024, 00:23:54.255 "process_max_bandwidth_mb_sec": 0 00:23:54.255 } 00:23:54.255 }, 00:23:54.255 { 00:23:54.255 "method": "bdev_iscsi_set_options", 00:23:54.255 "params": { 00:23:54.255 "timeout_sec": 30 00:23:54.255 } 00:23:54.255 }, 00:23:54.255 { 00:23:54.255 "method": "bdev_nvme_set_options", 00:23:54.255 "params": { 00:23:54.255 "action_on_timeout": "none", 00:23:54.255 "timeout_us": 0, 00:23:54.255 "timeout_admin_us": 0, 00:23:54.255 "keep_alive_timeout_ms": 10000, 00:23:54.255 "arbitration_burst": 0, 00:23:54.255 "low_priority_weight": 0, 00:23:54.255 "medium_priority_weight": 0, 00:23:54.255 "high_priority_weight": 0, 00:23:54.255 "nvme_adminq_poll_period_us": 10000, 00:23:54.255 "nvme_ioq_poll_period_us": 0, 00:23:54.255 "io_queue_requests": 0, 00:23:54.255 "delay_cmd_submit": true, 00:23:54.255 "transport_retry_count": 4, 00:23:54.255 
"bdev_retry_count": 3, 00:23:54.255 "transport_ack_timeout": 0, 00:23:54.255 "ctrlr_loss_timeout_sec": 0, 00:23:54.255 "reconnect_delay_sec": 0, 00:23:54.255 "fast_io_fail_timeout_sec": 0, 00:23:54.255 "disable_auto_failback": false, 00:23:54.255 "generate_uuids": false, 00:23:54.255 "transport_tos": 0, 00:23:54.256 "nvme_error_stat": false, 00:23:54.256 "rdma_srq_size": 0, 00:23:54.256 "io_path_stat": false, 00:23:54.256 "allow_accel_sequence": false, 00:23:54.256 "rdma_max_cq_size": 0, 00:23:54.256 "rdma_cm_event_timeout_ms": 0, 00:23:54.256 "dhchap_digests": [ 00:23:54.256 "sha256", 00:23:54.256 "sha384", 00:23:54.256 "sha512" 00:23:54.256 ], 00:23:54.256 "dhchap_dhgroups": [ 00:23:54.256 "null", 00:23:54.256 "ffdhe2048", 00:23:54.256 "ffdhe3072", 00:23:54.256 "ffdhe4096", 00:23:54.256 "ffdhe6144", 00:23:54.256 "ffdhe8192" 00:23:54.256 ] 00:23:54.256 } 00:23:54.256 }, 00:23:54.256 { 00:23:54.256 "method": "bdev_nvme_set_hotplug", 00:23:54.256 "params": { 00:23:54.256 "period_us": 100000, 00:23:54.256 "enable": false 00:23:54.256 } 00:23:54.256 }, 00:23:54.256 { 00:23:54.256 "method": "bdev_malloc_create", 00:23:54.256 "params": { 00:23:54.256 "name": "malloc0", 00:23:54.256 "num_blocks": 8192, 00:23:54.256 "block_size": 4096, 00:23:54.256 "physical_block_size": 4096, 00:23:54.256 "uuid": "96535172-c381-4eb3-99ca-25813eafea11", 00:23:54.256 "optimal_io_boundary": 0, 00:23:54.256 "md_size": 0, 00:23:54.256 "dif_type": 0, 00:23:54.256 "dif_is_head_of_md": false, 00:23:54.256 "dif_pi_format": 0 00:23:54.256 } 00:23:54.256 }, 00:23:54.256 { 00:23:54.256 "method": "bdev_wait_for_examine" 00:23:54.256 } 00:23:54.256 ] 00:23:54.256 }, 00:23:54.256 { 00:23:54.256 "subsystem": "scsi", 00:23:54.256 "config": null 00:23:54.256 }, 00:23:54.256 { 00:23:54.256 "subsystem": "scheduler", 00:23:54.256 "config": [ 00:23:54.256 { 00:23:54.256 "method": "framework_set_scheduler", 00:23:54.256 "params": { 00:23:54.256 "name": "static" 00:23:54.256 } 00:23:54.256 } 00:23:54.256 ] 00:23:54.256 }, 00:23:54.256 { 00:23:54.256 "subsystem": "vhost_scsi", 00:23:54.256 "config": [] 00:23:54.256 }, 00:23:54.256 { 00:23:54.256 "subsystem": "vhost_blk", 00:23:54.256 "config": [] 00:23:54.256 }, 00:23:54.256 { 00:23:54.256 "subsystem": "ublk", 00:23:54.256 "config": [ 00:23:54.256 { 00:23:54.256 "method": "ublk_create_target", 00:23:54.256 "params": { 00:23:54.256 "cpumask": "1" 00:23:54.256 } 00:23:54.256 }, 00:23:54.256 { 00:23:54.256 "method": "ublk_start_disk", 00:23:54.256 "params": { 00:23:54.256 "bdev_name": "malloc0", 00:23:54.256 "ublk_id": 0, 00:23:54.256 "num_queues": 1, 00:23:54.256 "queue_depth": 128 00:23:54.256 } 00:23:54.256 } 00:23:54.256 ] 00:23:54.256 }, 00:23:54.256 { 00:23:54.256 "subsystem": "nbd", 00:23:54.256 "config": [] 00:23:54.256 }, 00:23:54.256 { 00:23:54.256 "subsystem": "nvmf", 00:23:54.256 "config": [ 00:23:54.256 { 00:23:54.256 "method": "nvmf_set_config", 00:23:54.256 "params": { 00:23:54.256 "discovery_filter": "match_any", 00:23:54.256 "admin_cmd_passthru": { 00:23:54.256 "identify_ctrlr": false 00:23:54.256 }, 00:23:54.256 "dhchap_digests": [ 00:23:54.256 "sha256", 00:23:54.256 "sha384", 00:23:54.256 "sha512" 00:23:54.256 ], 00:23:54.256 "dhchap_dhgroups": [ 00:23:54.256 "null", 00:23:54.256 "ffdhe2048", 00:23:54.256 "ffdhe3072", 00:23:54.256 "ffdhe4096", 00:23:54.256 "ffdhe6144", 00:23:54.256 "ffdhe8192" 00:23:54.256 ] 00:23:54.256 } 00:23:54.256 }, 00:23:54.256 { 00:23:54.256 "method": "nvmf_set_max_subsystems", 00:23:54.256 "params": { 00:23:54.256 "max_subsystems": 1024 
00:23:54.256 } 00:23:54.256 }, 00:23:54.256 { 00:23:54.256 "method": "nvmf_set_crdt", 00:23:54.256 "params": { 00:23:54.256 "crdt1": 0, 00:23:54.256 "crdt2": 0, 00:23:54.256 "crdt3": 0 00:23:54.256 } 00:23:54.256 } 00:23:54.256 ] 00:23:54.256 }, 00:23:54.256 { 00:23:54.256 "subsystem": "iscsi", 00:23:54.256 "config": [ 00:23:54.256 { 00:23:54.256 "method": "iscsi_set_options", 00:23:54.256 "params": { 00:23:54.256 "node_base": "iqn.2016-06.io.spdk", 00:23:54.256 "max_sessions": 128, 00:23:54.256 "max_connections_per_session": 2, 00:23:54.256 "max_queue_depth": 64, 00:23:54.256 "default_time2wait": 2, 00:23:54.256 "default_time2retain": 20, 00:23:54.256 "first_burst_length": 8192, 00:23:54.256 "immediate_data": true, 00:23:54.256 "allow_duplicated_isid": false, 00:23:54.256 "error_recovery_level": 0, 00:23:54.256 "nop_timeout": 60, 00:23:54.256 "nop_in_interval": 30, 00:23:54.256 "disable_chap": false, 00:23:54.256 "require_chap": false, 00:23:54.256 "mutual_chap": false, 00:23:54.256 "chap_group": 0, 00:23:54.256 "max_large_datain_per_connection": 64, 00:23:54.256 "max_r2t_per_connection": 4, 00:23:54.256 "pdu_pool_size": 36864, 00:23:54.256 "immediate_data_pool_size": 16384, 00:23:54.256 "data_out_pool_size": 2048 00:23:54.256 } 00:23:54.256 } 00:23:54.256 ] 00:23:54.256 } 00:23:54.256 ] 00:23:54.256 }' 00:23:54.256 10:15:24 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 75720 00:23:54.256 10:15:24 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75720 ']' 00:23:54.256 10:15:24 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75720 00:23:54.256 10:15:24 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:23:54.256 10:15:24 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:54.256 10:15:24 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75720 00:23:54.256 10:15:24 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:54.256 killing process with pid 75720 00:23:54.256 10:15:24 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:54.256 10:15:24 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75720' 00:23:54.256 10:15:24 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75720 00:23:54.256 10:15:24 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75720 00:23:56.161 [2024-12-09 10:15:26.599252] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:23:56.161 [2024-12-09 10:15:26.634030] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:23:56.161 [2024-12-09 10:15:26.634234] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:23:56.161 [2024-12-09 10:15:26.641908] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:23:56.161 [2024-12-09 10:15:26.641987] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:23:56.161 [2024-12-09 10:15:26.642022] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:23:56.161 [2024-12-09 10:15:26.642059] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:23:56.161 [2024-12-09 10:15:26.642249] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:23:58.064 10:15:28 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=75792 00:23:58.064 10:15:28 ublk.test_save_ublk_config -- 
ublk/ublk.sh@121 -- # waitforlisten 75792 00:23:58.064 10:15:28 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:23:58.064 10:15:28 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75792 ']' 00:23:58.064 10:15:28 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:58.064 10:15:28 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:58.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:58.064 10:15:28 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:58.064 10:15:28 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:58.064 10:15:28 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:23:58.064 10:15:28 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:23:58.064 "subsystems": [ 00:23:58.064 { 00:23:58.064 "subsystem": "fsdev", 00:23:58.064 "config": [ 00:23:58.064 { 00:23:58.064 "method": "fsdev_set_opts", 00:23:58.064 "params": { 00:23:58.064 "fsdev_io_pool_size": 65535, 00:23:58.064 "fsdev_io_cache_size": 256 00:23:58.064 } 00:23:58.064 } 00:23:58.064 ] 00:23:58.064 }, 00:23:58.064 { 00:23:58.064 "subsystem": "keyring", 00:23:58.064 "config": [] 00:23:58.064 }, 00:23:58.064 { 00:23:58.064 "subsystem": "iobuf", 00:23:58.064 "config": [ 00:23:58.064 { 00:23:58.064 "method": "iobuf_set_options", 00:23:58.064 "params": { 00:23:58.064 "small_pool_count": 8192, 00:23:58.064 "large_pool_count": 1024, 00:23:58.064 "small_bufsize": 8192, 00:23:58.064 "large_bufsize": 135168, 00:23:58.064 "enable_numa": false 00:23:58.064 } 00:23:58.064 } 00:23:58.064 ] 00:23:58.064 }, 00:23:58.064 { 00:23:58.064 "subsystem": "sock", 00:23:58.064 "config": [ 00:23:58.064 { 00:23:58.064 "method": "sock_set_default_impl", 00:23:58.064 "params": { 00:23:58.064 "impl_name": "posix" 00:23:58.064 } 00:23:58.064 }, 00:23:58.064 { 00:23:58.064 "method": "sock_impl_set_options", 00:23:58.064 "params": { 00:23:58.064 "impl_name": "ssl", 00:23:58.064 "recv_buf_size": 4096, 00:23:58.064 "send_buf_size": 4096, 00:23:58.064 "enable_recv_pipe": true, 00:23:58.064 "enable_quickack": false, 00:23:58.064 "enable_placement_id": 0, 00:23:58.064 "enable_zerocopy_send_server": true, 00:23:58.064 "enable_zerocopy_send_client": false, 00:23:58.064 "zerocopy_threshold": 0, 00:23:58.064 "tls_version": 0, 00:23:58.064 "enable_ktls": false 00:23:58.064 } 00:23:58.064 }, 00:23:58.064 { 00:23:58.064 "method": "sock_impl_set_options", 00:23:58.064 "params": { 00:23:58.064 "impl_name": "posix", 00:23:58.064 "recv_buf_size": 2097152, 00:23:58.064 "send_buf_size": 2097152, 00:23:58.064 "enable_recv_pipe": true, 00:23:58.064 "enable_quickack": false, 00:23:58.064 "enable_placement_id": 0, 00:23:58.064 "enable_zerocopy_send_server": true, 00:23:58.064 "enable_zerocopy_send_client": false, 00:23:58.064 "zerocopy_threshold": 0, 00:23:58.064 "tls_version": 0, 00:23:58.064 "enable_ktls": false 00:23:58.064 } 00:23:58.064 } 00:23:58.064 ] 00:23:58.064 }, 00:23:58.064 { 00:23:58.064 "subsystem": "vmd", 00:23:58.064 "config": [] 00:23:58.064 }, 00:23:58.064 { 00:23:58.064 "subsystem": "accel", 00:23:58.064 "config": [ 00:23:58.064 { 00:23:58.064 "method": "accel_set_options", 00:23:58.064 "params": { 00:23:58.064 "small_cache_size": 128, 
00:23:58.064 "large_cache_size": 16, 00:23:58.064 "task_count": 2048, 00:23:58.064 "sequence_count": 2048, 00:23:58.064 "buf_count": 2048 00:23:58.064 } 00:23:58.064 } 00:23:58.064 ] 00:23:58.064 }, 00:23:58.064 { 00:23:58.064 "subsystem": "bdev", 00:23:58.064 "config": [ 00:23:58.064 { 00:23:58.064 "method": "bdev_set_options", 00:23:58.064 "params": { 00:23:58.064 "bdev_io_pool_size": 65535, 00:23:58.064 "bdev_io_cache_size": 256, 00:23:58.064 "bdev_auto_examine": true, 00:23:58.064 "iobuf_small_cache_size": 128, 00:23:58.064 "iobuf_large_cache_size": 16 00:23:58.064 } 00:23:58.064 }, 00:23:58.064 { 00:23:58.064 "method": "bdev_raid_set_options", 00:23:58.064 "params": { 00:23:58.064 "process_window_size_kb": 1024, 00:23:58.064 "process_max_bandwidth_mb_sec": 0 00:23:58.064 } 00:23:58.064 }, 00:23:58.064 { 00:23:58.064 "method": "bdev_iscsi_set_options", 00:23:58.064 "params": { 00:23:58.064 "timeout_sec": 30 00:23:58.064 } 00:23:58.064 }, 00:23:58.064 { 00:23:58.064 "method": "bdev_nvme_set_options", 00:23:58.064 "params": { 00:23:58.064 "action_on_timeout": "none", 00:23:58.064 "timeout_us": 0, 00:23:58.064 "timeout_admin_us": 0, 00:23:58.064 "keep_alive_timeout_ms": 10000, 00:23:58.064 "arbitration_burst": 0, 00:23:58.064 "low_priority_weight": 0, 00:23:58.064 "medium_priority_weight": 0, 00:23:58.064 "high_priority_weight": 0, 00:23:58.064 "nvme_adminq_poll_period_us": 10000, 00:23:58.064 "nvme_ioq_poll_period_us": 0, 00:23:58.064 "io_queue_requests": 0, 00:23:58.064 "delay_cmd_submit": true, 00:23:58.064 "transport_retry_count": 4, 00:23:58.064 "bdev_retry_count": 3, 00:23:58.064 "transport_ack_timeout": 0, 00:23:58.064 "ctrlr_loss_timeout_sec": 0, 00:23:58.064 "reconnect_delay_sec": 0, 00:23:58.064 "fast_io_fail_timeout_sec": 0, 00:23:58.064 "disable_auto_failback": false, 00:23:58.064 "generate_uuids": false, 00:23:58.064 "transport_tos": 0, 00:23:58.064 "nvme_error_stat": false, 00:23:58.064 "rdma_srq_size": 0, 00:23:58.064 "io_path_stat": false, 00:23:58.064 "allow_accel_sequence": false, 00:23:58.064 "rdma_max_cq_size": 0, 00:23:58.064 "rdma_cm_event_timeout_ms": 0, 00:23:58.064 "dhchap_digests": [ 00:23:58.064 "sha256", 00:23:58.064 "sha384", 00:23:58.064 "sha512" 00:23:58.064 ], 00:23:58.064 "dhchap_dhgroups": [ 00:23:58.064 "null", 00:23:58.064 "ffdhe2048", 00:23:58.064 "ffdhe3072", 00:23:58.064 "ffdhe4096", 00:23:58.064 "ffdhe6144", 00:23:58.064 "ffdhe8192" 00:23:58.064 ] 00:23:58.064 } 00:23:58.064 }, 00:23:58.064 { 00:23:58.064 "method": "bdev_nvme_set_hotplug", 00:23:58.064 "params": { 00:23:58.064 "period_us": 100000, 00:23:58.064 "enable": false 00:23:58.064 } 00:23:58.064 }, 00:23:58.064 { 00:23:58.064 "method": "bdev_malloc_create", 00:23:58.064 "params": { 00:23:58.064 "name": "malloc0", 00:23:58.064 "num_blocks": 8192, 00:23:58.064 "block_size": 4096, 00:23:58.064 "physical_block_size": 4096, 00:23:58.064 "uuid": "96535172-c381-4eb3-99ca-25813eafea11", 00:23:58.064 "optimal_io_boundary": 0, 00:23:58.064 "md_size": 0, 00:23:58.064 "dif_type": 0, 00:23:58.064 "dif_is_head_of_md": false, 00:23:58.064 "dif_pi_format": 0 00:23:58.064 } 00:23:58.064 }, 00:23:58.064 { 00:23:58.064 "method": "bdev_wait_for_examine" 00:23:58.064 } 00:23:58.064 ] 00:23:58.064 }, 00:23:58.064 { 00:23:58.064 "subsystem": "scsi", 00:23:58.064 "config": null 00:23:58.064 }, 00:23:58.064 { 00:23:58.064 "subsystem": "scheduler", 00:23:58.064 "config": [ 00:23:58.064 { 00:23:58.064 "method": "framework_set_scheduler", 00:23:58.064 "params": { 00:23:58.064 "name": "static" 00:23:58.064 } 
00:23:58.064 } 00:23:58.064 ] 00:23:58.064 }, 00:23:58.064 { 00:23:58.065 "subsystem": "vhost_scsi", 00:23:58.065 "config": [] 00:23:58.065 }, 00:23:58.065 { 00:23:58.065 "subsystem": "vhost_blk", 00:23:58.065 "config": [] 00:23:58.065 }, 00:23:58.065 { 00:23:58.065 "subsystem": "ublk", 00:23:58.065 "config": [ 00:23:58.065 { 00:23:58.065 "method": "ublk_create_target", 00:23:58.065 "params": { 00:23:58.065 "cpumask": "1" 00:23:58.065 } 00:23:58.065 }, 00:23:58.065 { 00:23:58.065 "method": "ublk_start_disk", 00:23:58.065 "params": { 00:23:58.065 "bdev_name": "malloc0", 00:23:58.065 "ublk_id": 0, 00:23:58.065 "num_queues": 1, 00:23:58.065 "queue_depth": 128 00:23:58.065 } 00:23:58.065 } 00:23:58.065 ] 00:23:58.065 }, 00:23:58.065 { 00:23:58.065 "subsystem": "nbd", 00:23:58.065 "config": [] 00:23:58.065 }, 00:23:58.065 { 00:23:58.065 "subsystem": "nvmf", 00:23:58.065 "config": [ 00:23:58.065 { 00:23:58.065 "method": "nvmf_set_config", 00:23:58.065 "params": { 00:23:58.065 "discovery_filter": "match_any", 00:23:58.065 "admin_cmd_passthru": { 00:23:58.065 "identify_ctrlr": false 00:23:58.065 }, 00:23:58.065 "dhchap_digests": [ 00:23:58.065 "sha256", 00:23:58.065 "sha384", 00:23:58.065 "sha512" 00:23:58.065 ], 00:23:58.065 "dhchap_dhgroups": [ 00:23:58.065 "null", 00:23:58.065 "ffdhe2048", 00:23:58.065 "ffdhe3072", 00:23:58.065 "ffdhe4096", 00:23:58.065 "ffdhe6144", 00:23:58.065 "ffdhe8192" 00:23:58.065 ] 00:23:58.065 } 00:23:58.065 }, 00:23:58.065 { 00:23:58.065 "method": "nvmf_set_max_subsystems", 00:23:58.065 "params": { 00:23:58.065 "max_subsystems": 1024 00:23:58.065 } 00:23:58.065 }, 00:23:58.065 { 00:23:58.065 "method": "nvmf_set_crdt", 00:23:58.065 "params": { 00:23:58.065 "crdt1": 0, 00:23:58.065 "crdt2": 0, 00:23:58.065 "crdt3": 0 00:23:58.065 } 00:23:58.065 } 00:23:58.065 ] 00:23:58.065 }, 00:23:58.065 { 00:23:58.065 "subsystem": "iscsi", 00:23:58.065 "config": [ 00:23:58.065 { 00:23:58.065 "method": "iscsi_set_options", 00:23:58.065 "params": { 00:23:58.065 "node_base": "iqn.2016-06.io.spdk", 00:23:58.065 "max_sessions": 128, 00:23:58.065 "max_connections_per_session": 2, 00:23:58.065 "max_queue_depth": 64, 00:23:58.065 "default_time2wait": 2, 00:23:58.065 "default_time2retain": 20, 00:23:58.065 "first_burst_length": 8192, 00:23:58.065 "immediate_data": true, 00:23:58.065 "allow_duplicated_isid": false, 00:23:58.065 "error_recovery_level": 0, 00:23:58.065 "nop_timeout": 60, 00:23:58.065 "nop_in_interval": 30, 00:23:58.065 "disable_chap": false, 00:23:58.065 "require_chap": false, 00:23:58.065 "mutual_chap": false, 00:23:58.065 "chap_group": 0, 00:23:58.065 "max_large_datain_per_connection": 64, 00:23:58.065 "max_r2t_per_connection": 4, 00:23:58.065 "pdu_pool_size": 36864, 00:23:58.065 "immediate_data_pool_size": 16384, 00:23:58.065 "data_out_pool_size": 2048 00:23:58.065 } 00:23:58.065 } 00:23:58.065 ] 00:23:58.065 } 00:23:58.065 ] 00:23:58.065 }' 00:23:58.065 [2024-12-09 10:15:28.639458] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
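Once the second target (pid 75792) has replayed that JSON, the test only has to confirm that the ublk device came back. The checks in the trace that follows amount to the sketch below (commands as logged; RPC path assumed from the repo layout):

  # Verify the restored target recreated the disk:
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  blkpath=$("$RPC" ublk_get_disks | jq -r '.[0].ublk_device')
  [[ $blkpath == /dev/ublkb0 ]]    # the RPC view names the same device node
  [[ -b /dev/ublkb0 ]]             # and the block device actually exists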
00:23:58.065 [2024-12-09 10:15:28.639656] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75792 ] 00:23:58.065 [2024-12-09 10:15:28.819380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.323 [2024-12-09 10:15:28.961096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:59.698 [2024-12-09 10:15:30.095909] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:23:59.698 [2024-12-09 10:15:30.097338] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:23:59.698 [2024-12-09 10:15:30.103080] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:23:59.698 [2024-12-09 10:15:30.103228] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:23:59.698 [2024-12-09 10:15:30.103251] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:23:59.698 [2024-12-09 10:15:30.103261] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:23:59.698 [2024-12-09 10:15:30.110962] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:23:59.698 [2024-12-09 10:15:30.110995] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:23:59.698 [2024-12-09 10:15:30.117992] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:23:59.698 [2024-12-09 10:15:30.118176] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:23:59.698 [2024-12-09 10:15:30.134861] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:23:59.698 10:15:30 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:59.698 10:15:30 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:23:59.698 10:15:30 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:23:59.698 10:15:30 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.698 10:15:30 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:23:59.698 10:15:30 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:23:59.698 10:15:30 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.698 10:15:30 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:23:59.698 10:15:30 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:23:59.698 10:15:30 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 75792 00:23:59.698 10:15:30 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75792 ']' 00:23:59.698 10:15:30 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75792 00:23:59.698 10:15:30 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:23:59.698 10:15:30 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:59.698 10:15:30 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75792 00:23:59.698 10:15:30 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:59.698 killing process with pid 75792 00:23:59.698 
10:15:30 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:59.698 10:15:30 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75792' 00:23:59.698 10:15:30 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75792 00:23:59.698 10:15:30 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75792 00:24:01.619 [2024-12-09 10:15:32.040718] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:24:01.619 [2024-12-09 10:15:32.080862] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:24:01.619 [2024-12-09 10:15:32.081036] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:24:01.619 [2024-12-09 10:15:32.087900] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:24:01.619 [2024-12-09 10:15:32.087963] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:24:01.619 [2024-12-09 10:15:32.087977] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:24:01.619 [2024-12-09 10:15:32.088015] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:24:01.619 [2024-12-09 10:15:32.088225] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:24:03.524 10:15:33 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:24:03.524 00:24:03.524 real 0m10.982s 00:24:03.524 user 0m7.940s 00:24:03.524 sys 0m4.093s 00:24:03.524 10:15:33 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:03.524 ************************************ 00:24:03.524 END TEST test_save_ublk_config 00:24:03.524 ************************************ 00:24:03.524 10:15:33 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:24:03.524 10:15:34 ublk -- ublk/ublk.sh@139 -- # spdk_pid=75883 00:24:03.524 10:15:34 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:24:03.524 10:15:34 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:03.524 10:15:34 ublk -- ublk/ublk.sh@141 -- # waitforlisten 75883 00:24:03.524 10:15:34 ublk -- common/autotest_common.sh@835 -- # '[' -z 75883 ']' 00:24:03.524 10:15:34 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.524 10:15:34 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:03.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:03.524 10:15:34 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:03.524 10:15:34 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:03.524 10:15:34 ublk -- common/autotest_common.sh@10 -- # set +x 00:24:03.524 [2024-12-09 10:15:34.144774] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
00:24:03.524 [2024-12-09 10:15:34.145013] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75883 ] 00:24:03.783 [2024-12-09 10:15:34.323964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:03.783 [2024-12-09 10:15:34.482297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:03.783 [2024-12-09 10:15:34.482322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:04.719 10:15:35 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:04.719 10:15:35 ublk -- common/autotest_common.sh@868 -- # return 0 00:24:04.719 10:15:35 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:24:04.719 10:15:35 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:04.719 10:15:35 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:04.719 10:15:35 ublk -- common/autotest_common.sh@10 -- # set +x 00:24:04.978 ************************************ 00:24:04.978 START TEST test_create_ublk 00:24:04.978 ************************************ 00:24:04.978 10:15:35 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:24:04.978 10:15:35 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:24:04.978 10:15:35 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.978 10:15:35 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:04.978 [2024-12-09 10:15:35.531920] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:24:04.978 [2024-12-09 10:15:35.539309] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:24:04.978 10:15:35 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.978 10:15:35 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:24:04.978 10:15:35 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:24:04.978 10:15:35 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.978 10:15:35 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:05.236 10:15:35 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.236 10:15:35 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:24:05.236 10:15:35 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:24:05.236 10:15:35 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.236 10:15:35 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:05.236 [2024-12-09 10:15:35.826203] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:24:05.236 [2024-12-09 10:15:35.826821] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:24:05.236 [2024-12-09 10:15:35.826880] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:24:05.236 [2024-12-09 10:15:35.826894] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:24:05.236 [2024-12-09 10:15:35.833889] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:24:05.236 [2024-12-09 10:15:35.833919] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:24:05.236 
[2024-12-09 10:15:35.839937] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:24:05.236 [2024-12-09 10:15:35.840750] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:24:05.236 [2024-12-09 10:15:35.856002] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:24:05.236 10:15:35 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.236 10:15:35 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:24:05.236 10:15:35 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:24:05.236 10:15:35 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:24:05.236 10:15:35 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.236 10:15:35 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:05.236 10:15:35 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.236 10:15:35 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:24:05.236 { 00:24:05.236 "ublk_device": "/dev/ublkb0", 00:24:05.236 "id": 0, 00:24:05.236 "queue_depth": 512, 00:24:05.236 "num_queues": 4, 00:24:05.236 "bdev_name": "Malloc0" 00:24:05.236 } 00:24:05.236 ]' 00:24:05.236 10:15:35 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:24:05.236 10:15:35 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:24:05.236 10:15:35 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:24:05.236 10:15:35 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:24:05.236 10:15:35 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:24:05.236 10:15:36 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:24:05.550 10:15:36 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:24:05.550 10:15:36 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:24:05.550 10:15:36 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:24:05.550 10:15:36 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:24:05.550 10:15:36 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:24:05.550 10:15:36 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:24:05.550 10:15:36 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:24:05.550 10:15:36 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:24:05.550 10:15:36 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:24:05.550 10:15:36 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:24:05.550 10:15:36 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:24:05.550 10:15:36 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:24:05.550 10:15:36 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:24:05.550 10:15:36 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:24:05.550 10:15:36 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
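run_fio_test has now finished assembling its command line; the next step in the trace simply executes it. Written out directly, the data check is plain fio doing a time-based pattern write with inline verification (every argument copied from the fio_template composed above):

  # Write the 0xcc pattern over the first 128 MiB of the ublk device for 10 s,
  # verifying each block as it is written (verify-state saving disabled):
  fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
      --rw=write --direct=1 --time_based --runtime=10 \
      --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0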
00:24:05.550 10:15:36 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:24:05.550 fio: verification read phase will never start because write phase uses all of runtime 00:24:05.550 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:24:05.550 fio-3.35 00:24:05.550 Starting 1 process 00:24:17.770 00:24:17.770 fio_test: (groupid=0, jobs=1): err= 0: pid=75935: Mon Dec 9 10:15:46 2024 00:24:17.770 write: IOPS=10.5k, BW=41.0MiB/s (43.0MB/s)(410MiB/10001msec); 0 zone resets 00:24:17.770 clat (usec): min=60, max=10041, avg=93.83, stdev=164.25 00:24:17.770 lat (usec): min=60, max=10062, avg=94.57, stdev=164.28 00:24:17.770 clat percentiles (usec): 00:24:17.770 | 1.00th=[ 72], 5.00th=[ 74], 10.00th=[ 75], 20.00th=[ 76], 00:24:17.770 | 30.00th=[ 77], 40.00th=[ 79], 50.00th=[ 80], 60.00th=[ 82], 00:24:17.770 | 70.00th=[ 88], 80.00th=[ 93], 90.00th=[ 102], 95.00th=[ 110], 00:24:17.770 | 99.00th=[ 128], 99.50th=[ 149], 99.90th=[ 3163], 99.95th=[ 3654], 00:24:17.770 | 99.99th=[ 4146] 00:24:17.771 bw ( KiB/s): min=18323, max=43616, per=99.99%, avg=41980.42, stdev=5732.83, samples=19 00:24:17.771 iops : min= 4580, max=10904, avg=10495.05, stdev=1433.38, samples=19 00:24:17.771 lat (usec) : 100=88.46%, 250=11.10%, 500=0.01%, 750=0.02%, 1000=0.03% 00:24:17.771 lat (msec) : 2=0.12%, 4=0.24%, 10=0.02%, 20=0.01% 00:24:17.771 cpu : usr=2.95%, sys=7.90%, ctx=104972, majf=0, minf=797 00:24:17.771 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:17.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:17.771 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:17.771 issued rwts: total=0,104972,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:17.771 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:17.771 00:24:17.771 Run status group 0 (all jobs): 00:24:17.771 WRITE: bw=41.0MiB/s (43.0MB/s), 41.0MiB/s-41.0MiB/s (43.0MB/s-43.0MB/s), io=410MiB (430MB), run=10001-10001msec 00:24:17.771 00:24:17.771 Disk stats (read/write): 00:24:17.771 ublkb0: ios=0/103867, merge=0/0, ticks=0/8921, in_queue=8922, util=99.11% 00:24:17.771 10:15:46 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:24:17.771 10:15:46 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.771 10:15:46 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:17.771 [2024-12-09 10:15:46.393318] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:24:17.771 [2024-12-09 10:15:46.432913] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:24:17.771 [2024-12-09 10:15:46.433719] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:24:17.771 [2024-12-09 10:15:46.443965] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:24:17.771 [2024-12-09 10:15:46.444501] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:24:17.771 [2024-12-09 10:15:46.447851] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:24:17.771 10:15:46 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.771 10:15:46 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:24:17.771 
10:15:46 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:24:17.771 10:15:46 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:24:17.771 10:15:46 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:17.771 10:15:46 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:17.771 10:15:46 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:17.771 10:15:46 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:17.771 10:15:46 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:24:17.771 10:15:46 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.771 10:15:46 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:17.771 [2024-12-09 10:15:46.457007] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:24:17.771 request: 00:24:17.771 { 00:24:17.771 "ublk_id": 0, 00:24:17.771 "method": "ublk_stop_disk", 00:24:17.771 "req_id": 1 00:24:17.771 } 00:24:17.771 Got JSON-RPC error response 00:24:17.771 response: 00:24:17.771 { 00:24:17.771 "code": -19, 00:24:17.771 "message": "No such device" 00:24:17.771 } 00:24:17.771 10:15:46 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:17.771 10:15:46 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:24:17.771 10:15:46 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:17.771 10:15:46 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:17.771 10:15:46 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:17.771 10:15:46 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:24:17.771 10:15:46 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.771 10:15:46 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:17.771 [2024-12-09 10:15:46.473037] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:24:17.771 [2024-12-09 10:15:46.480901] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:24:17.771 [2024-12-09 10:15:46.480969] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:24:17.771 10:15:46 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.771 10:15:46 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:17.771 10:15:46 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.771 10:15:46 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:17.771 10:15:47 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.771 10:15:47 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:24:17.771 10:15:47 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:24:17.771 10:15:47 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.771 10:15:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:17.771 10:15:47 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.771 10:15:47 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:24:17.771 10:15:47 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:24:17.771 10:15:47 ublk.test_create_ublk -- lvol/common.sh@26 -- # 
'[' 0 == 0 ']' 00:24:17.771 10:15:47 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:24:17.771 10:15:47 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.771 10:15:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:17.771 10:15:47 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.771 10:15:47 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:24:17.771 10:15:47 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:24:17.771 ************************************ 00:24:17.771 END TEST test_create_ublk 00:24:17.771 ************************************ 00:24:17.771 10:15:47 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:24:17.771 00:24:17.771 real 0m11.751s 00:24:17.771 user 0m0.749s 00:24:17.771 sys 0m0.906s 00:24:17.771 10:15:47 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:17.771 10:15:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:17.771 10:15:47 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:24:17.771 10:15:47 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:17.771 10:15:47 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:17.771 10:15:47 ublk -- common/autotest_common.sh@10 -- # set +x 00:24:17.771 ************************************ 00:24:17.771 START TEST test_create_multi_ublk 00:24:17.771 ************************************ 00:24:17.771 10:15:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:24:17.771 10:15:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:24:17.771 10:15:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.771 10:15:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:17.771 [2024-12-09 10:15:47.340926] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:24:17.771 [2024-12-09 10:15:47.343984] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:24:17.771 10:15:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.771 10:15:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:24:17.771 10:15:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:24:17.771 10:15:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:17.771 10:15:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:24:17.771 10:15:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.771 10:15:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:17.771 10:15:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.771 10:15:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:24:17.771 10:15:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:24:17.771 10:15:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.771 10:15:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:17.771 [2024-12-09 10:15:47.654095] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:24:17.771 [2024-12-09 
10:15:47.654653] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:24:17.771 [2024-12-09 10:15:47.654669] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:24:17.771 [2024-12-09 10:15:47.654704] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:24:17.771 [2024-12-09 10:15:47.666427] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:24:17.771 [2024-12-09 10:15:47.666460] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:24:17.771 [2024-12-09 10:15:47.671945] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:24:17.771 [2024-12-09 10:15:47.672831] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:24:17.771 [2024-12-09 10:15:47.687198] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:24:17.771 10:15:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.771 10:15:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:24:17.771 10:15:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:17.771 10:15:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:24:17.771 10:15:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.771 10:15:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:17.771 10:15:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.771 10:15:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:24:17.771 10:15:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:24:17.771 10:15:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.771 10:15:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:17.771 [2024-12-09 10:15:47.986080] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:24:17.771 [2024-12-09 10:15:47.986682] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:24:17.771 [2024-12-09 10:15:47.986714] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:24:17.771 [2024-12-09 10:15:47.986726] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:24:17.771 [2024-12-09 10:15:47.997909] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:24:17.771 [2024-12-09 10:15:47.997940] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:24:17.771 [2024-12-09 10:15:48.005869] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:24:17.772 [2024-12-09 10:15:48.006664] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:24:17.772 [2024-12-09 10:15:48.022925] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:24:17.772 10:15:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.772 10:15:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:24:17.772 10:15:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:17.772 10:15:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 
-- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:24:17.772 10:15:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.772 10:15:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:17.772 10:15:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.772 10:15:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:24:17.772 10:15:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:24:17.772 10:15:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.772 10:15:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:17.772 [2024-12-09 10:15:48.314062] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:24:17.772 [2024-12-09 10:15:48.314625] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:24:17.772 [2024-12-09 10:15:48.314643] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:24:17.772 [2024-12-09 10:15:48.314655] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:24:17.772 [2024-12-09 10:15:48.318629] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:24:17.772 [2024-12-09 10:15:48.318662] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:24:17.772 [2024-12-09 10:15:48.327946] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:24:17.772 [2024-12-09 10:15:48.328823] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:24:17.772 [2024-12-09 10:15:48.333384] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:24:17.772 10:15:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.772 10:15:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:24:17.772 10:15:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:17.772 10:15:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:24:17.772 10:15:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.772 10:15:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:18.032 10:15:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.032 10:15:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:24:18.032 10:15:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:24:18.032 10:15:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.032 10:15:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:18.032 [2024-12-09 10:15:48.624153] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:24:18.032 [2024-12-09 10:15:48.624716] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:24:18.032 [2024-12-09 10:15:48.624735] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:24:18.032 [2024-12-09 10:15:48.624745] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:24:18.032 [2024-12-09 10:15:48.631896] ublk.c: 
349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:24:18.032 [2024-12-09 10:15:48.631922] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:24:18.032 [2024-12-09 10:15:48.637977] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:24:18.032 [2024-12-09 10:15:48.638852] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:24:18.032 [2024-12-09 10:15:48.646975] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:24:18.032 10:15:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.032 10:15:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:24:18.032 10:15:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:24:18.032 10:15:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.032 10:15:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:18.032 10:15:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.032 10:15:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:24:18.032 { 00:24:18.032 "ublk_device": "/dev/ublkb0", 00:24:18.032 "id": 0, 00:24:18.032 "queue_depth": 512, 00:24:18.032 "num_queues": 4, 00:24:18.032 "bdev_name": "Malloc0" 00:24:18.032 }, 00:24:18.032 { 00:24:18.032 "ublk_device": "/dev/ublkb1", 00:24:18.032 "id": 1, 00:24:18.032 "queue_depth": 512, 00:24:18.032 "num_queues": 4, 00:24:18.032 "bdev_name": "Malloc1" 00:24:18.032 }, 00:24:18.032 { 00:24:18.032 "ublk_device": "/dev/ublkb2", 00:24:18.032 "id": 2, 00:24:18.032 "queue_depth": 512, 00:24:18.032 "num_queues": 4, 00:24:18.032 "bdev_name": "Malloc2" 00:24:18.032 }, 00:24:18.032 { 00:24:18.032 "ublk_device": "/dev/ublkb3", 00:24:18.032 "id": 3, 00:24:18.032 "queue_depth": 512, 00:24:18.032 "num_queues": 4, 00:24:18.032 "bdev_name": "Malloc3" 00:24:18.032 } 00:24:18.032 ]' 00:24:18.032 10:15:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:24:18.032 10:15:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:18.032 10:15:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:24:18.032 10:15:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:24:18.032 10:15:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:24:18.032 10:15:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:24:18.032 10:15:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:24:18.291 10:15:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:24:18.291 10:15:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:24:18.291 10:15:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:24:18.291 10:15:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:24:18.291 10:15:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:24:18.291 10:15:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:18.291 10:15:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:24:18.291 10:15:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:24:18.291 10:15:49 
ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:24:18.291 10:15:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:24:18.291 10:15:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:24:18.550 10:15:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:24:18.550 10:15:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:24:18.550 10:15:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:24:18.550 10:15:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:24:18.550 10:15:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:24:18.550 10:15:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:18.550 10:15:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:24:18.550 10:15:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:24:18.550 10:15:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:24:18.550 10:15:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:24:18.550 10:15:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:24:18.808 10:15:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:24:18.808 10:15:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:24:18.808 10:15:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:24:18.808 10:15:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:24:18.808 10:15:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:24:18.808 10:15:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:18.808 10:15:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:24:18.808 10:15:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:24:18.808 10:15:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:24:18.808 10:15:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:24:18.808 10:15:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:24:19.067 10:15:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:24:19.067 10:15:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:24:19.067 10:15:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:24:19.067 10:15:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:24:19.067 10:15:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:24:19.067 10:15:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:24:19.067 10:15:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:24:19.067 10:15:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:19.067 10:15:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:24:19.067 10:15:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.067 10:15:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:19.067 [2024-12-09 10:15:49.756270] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl 
cmd UBLK_CMD_STOP_DEV 00:24:19.067 [2024-12-09 10:15:49.792619] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:24:19.067 [2024-12-09 10:15:49.793812] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:24:19.067 [2024-12-09 10:15:49.800940] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:24:19.067 [2024-12-09 10:15:49.801262] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:24:19.067 [2024-12-09 10:15:49.801282] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:24:19.067 10:15:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.067 10:15:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:19.067 10:15:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:24:19.067 10:15:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.067 10:15:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:19.067 [2024-12-09 10:15:49.816048] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:24:19.067 [2024-12-09 10:15:49.862902] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:24:19.067 [2024-12-09 10:15:49.864024] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:24:19.325 [2024-12-09 10:15:49.873876] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:24:19.325 [2024-12-09 10:15:49.874220] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:24:19.325 [2024-12-09 10:15:49.874240] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:24:19.325 10:15:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.325 10:15:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:19.325 10:15:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:24:19.325 10:15:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.325 10:15:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:19.325 [2024-12-09 10:15:49.889069] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:24:19.325 [2024-12-09 10:15:49.934963] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:24:19.325 [2024-12-09 10:15:49.936005] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:24:19.325 [2024-12-09 10:15:49.944867] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:24:19.325 [2024-12-09 10:15:49.945211] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:24:19.325 [2024-12-09 10:15:49.945232] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:24:19.325 10:15:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.325 10:15:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:19.325 10:15:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:24:19.325 10:15:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.325 10:15:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:19.325 [2024-12-09 
10:15:49.952016] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:24:19.325 [2024-12-09 10:15:49.997923] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:24:19.325 [2024-12-09 10:15:49.998843] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:24:19.325 [2024-12-09 10:15:50.006947] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:24:19.325 [2024-12-09 10:15:50.007259] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:24:19.325 [2024-12-09 10:15:50.007278] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:24:19.325 10:15:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.325 10:15:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:24:19.584 [2024-12-09 10:15:50.288888] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:24:19.584 [2024-12-09 10:15:50.295907] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:24:19.584 [2024-12-09 10:15:50.295968] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:24:19.584 10:15:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:24:19.584 10:15:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:19.584 10:15:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:19.584 10:15:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.584 10:15:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:20.521 10:15:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.521 10:15:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:20.521 10:15:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:20.521 10:15:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.521 10:15:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:20.521 10:15:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.521 10:15:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:20.521 10:15:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:24:20.521 10:15:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.521 10:15:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:21.088 10:15:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.088 10:15:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:21.088 10:15:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:24:21.088 10:15:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.088 10:15:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:21.347 10:15:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.347 10:15:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:24:21.347 10:15:51 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 
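The teardown that just completed follows a strict order: each disk is stopped (UBLK_CMD_STOP_DEV then UBLK_CMD_DEL_DEV) before the target itself is destroyed, and only then are the backing bdevs deleted. A sketch of the equivalent manual sequence, with the rpc.py path, ids, and bdev names from this run:
  for i in 0 1 2 3; do /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_stop_disk $i; done
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target   # longer RPC timeout, presumably because destroy blocks on driver shutdown
  for m in Malloc0 Malloc1 Malloc2 Malloc3; do /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete $m; done
The leftover-device check that follows simply asserts that bdev_get_bdevs and bdev_lvol_get_lvstores both return empty arrays.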
00:24:21.347 10:15:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.347 10:15:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:21.347 10:15:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.347 10:15:51 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:24:21.347 10:15:51 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:24:21.347 10:15:52 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:24:21.347 10:15:52 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:24:21.347 10:15:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.347 10:15:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:21.347 10:15:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.347 10:15:52 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:24:21.347 10:15:52 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:24:21.347 10:15:52 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:24:21.347 00:24:21.347 real 0m4.771s 00:24:21.347 user 0m1.358s 00:24:21.347 sys 0m0.176s 00:24:21.347 ************************************ 00:24:21.347 END TEST test_create_multi_ublk 00:24:21.347 10:15:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:21.347 10:15:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:21.347 ************************************ 00:24:21.347 10:15:52 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:24:21.347 10:15:52 ublk -- ublk/ublk.sh@147 -- # cleanup 00:24:21.347 10:15:52 ublk -- ublk/ublk.sh@130 -- # killprocess 75883 00:24:21.347 10:15:52 ublk -- common/autotest_common.sh@954 -- # '[' -z 75883 ']' 00:24:21.347 10:15:52 ublk -- common/autotest_common.sh@958 -- # kill -0 75883 00:24:21.347 10:15:52 ublk -- common/autotest_common.sh@959 -- # uname 00:24:21.606 10:15:52 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:21.606 10:15:52 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75883 00:24:21.606 10:15:52 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:21.606 killing process with pid 75883 00:24:21.606 10:15:52 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:21.606 10:15:52 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75883' 00:24:21.606 10:15:52 ublk -- common/autotest_common.sh@973 -- # kill 75883 00:24:21.606 10:15:52 ublk -- common/autotest_common.sh@978 -- # wait 75883 00:24:22.543 [2024-12-09 10:15:53.276111] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:24:22.543 [2024-12-09 10:15:53.276247] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:24:23.920 00:24:23.920 real 0m31.831s 00:24:23.920 user 0m44.666s 00:24:23.920 sys 0m11.523s 00:24:23.920 10:15:54 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:23.920 ************************************ 00:24:23.920 END TEST ublk 00:24:23.920 ************************************ 00:24:23.920 10:15:54 ublk -- common/autotest_common.sh@10 -- # set +x 00:24:23.920 10:15:54 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:24:23.920 10:15:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:23.920 
10:15:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:23.920 10:15:54 -- common/autotest_common.sh@10 -- # set +x 00:24:23.920 ************************************ 00:24:23.920 START TEST ublk_recovery 00:24:23.920 ************************************ 00:24:23.920 10:15:54 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:24:24.179 * Looking for test storage... 00:24:24.179 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:24:24.179 10:15:54 ublk_recovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:24.179 10:15:54 ublk_recovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:24.179 10:15:54 ublk_recovery -- common/autotest_common.sh@1711 -- # lcov --version 00:24:24.179 10:15:54 ublk_recovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:24.179 10:15:54 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:24.179 10:15:54 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:24.179 10:15:54 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:24.179 10:15:54 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:24:24.179 10:15:54 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:24:24.179 10:15:54 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:24:24.179 10:15:54 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:24:24.179 10:15:54 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:24:24.179 10:15:54 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:24:24.179 10:15:54 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:24:24.179 10:15:54 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:24.179 10:15:54 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:24:24.179 10:15:54 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:24:24.179 10:15:54 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:24.179 10:15:54 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:24.179 10:15:54 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:24:24.179 10:15:54 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:24:24.179 10:15:54 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:24.179 10:15:54 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:24:24.179 10:15:54 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:24:24.179 10:15:54 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:24:24.179 10:15:54 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:24:24.179 10:15:54 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:24.179 10:15:54 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:24:24.179 10:15:54 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:24:24.179 10:15:54 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:24.179 10:15:54 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:24.179 10:15:54 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:24:24.179 10:15:54 ublk_recovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:24.179 10:15:54 ublk_recovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:24.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.179 --rc genhtml_branch_coverage=1 00:24:24.179 --rc genhtml_function_coverage=1 00:24:24.179 --rc genhtml_legend=1 00:24:24.179 --rc geninfo_all_blocks=1 00:24:24.179 --rc geninfo_unexecuted_blocks=1 00:24:24.179 00:24:24.179 ' 00:24:24.179 10:15:54 ublk_recovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:24.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.180 --rc genhtml_branch_coverage=1 00:24:24.180 --rc genhtml_function_coverage=1 00:24:24.180 --rc genhtml_legend=1 00:24:24.180 --rc geninfo_all_blocks=1 00:24:24.180 --rc geninfo_unexecuted_blocks=1 00:24:24.180 00:24:24.180 ' 00:24:24.180 10:15:54 ublk_recovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:24.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.180 --rc genhtml_branch_coverage=1 00:24:24.180 --rc genhtml_function_coverage=1 00:24:24.180 --rc genhtml_legend=1 00:24:24.180 --rc geninfo_all_blocks=1 00:24:24.180 --rc geninfo_unexecuted_blocks=1 00:24:24.180 00:24:24.180 ' 00:24:24.180 10:15:54 ublk_recovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:24.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.180 --rc genhtml_branch_coverage=1 00:24:24.180 --rc genhtml_function_coverage=1 00:24:24.180 --rc genhtml_legend=1 00:24:24.180 --rc geninfo_all_blocks=1 00:24:24.180 --rc geninfo_unexecuted_blocks=1 00:24:24.180 00:24:24.180 ' 00:24:24.180 10:15:54 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:24:24.180 10:15:54 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:24:24.180 10:15:54 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:24:24.180 10:15:54 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:24:24.180 10:15:54 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:24:24.180 10:15:54 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:24:24.180 10:15:54 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:24:24.180 10:15:54 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:24:24.180 10:15:54 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:24:24.180 10:15:54 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:24:24.180 10:15:54 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=76303 00:24:24.180 10:15:54 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:24:24.180 10:15:54 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:24.180 10:15:54 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 76303 00:24:24.180 10:15:54 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 76303 ']' 00:24:24.180 10:15:54 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:24.180 10:15:54 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:24.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:24.180 10:15:54 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:24.180 10:15:54 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:24.180 10:15:54 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:24.444 [2024-12-09 10:15:55.017525] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:24:24.444 [2024-12-09 10:15:55.018417] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76303 ] 00:24:24.444 [2024-12-09 10:15:55.212630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:24.705 [2024-12-09 10:15:55.359992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:24.705 [2024-12-09 10:15:55.360064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:25.641 10:15:56 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:25.641 10:15:56 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:24:25.641 10:15:56 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:24:25.641 10:15:56 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.641 10:15:56 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:25.641 [2024-12-09 10:15:56.317937] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:24:25.641 [2024-12-09 10:15:56.321171] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:24:25.641 10:15:56 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.641 10:15:56 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:24:25.641 10:15:56 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.641 10:15:56 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:25.899 malloc0 00:24:25.899 10:15:56 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.899 10:15:56 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:24:25.899 10:15:56 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.899 10:15:56 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:25.899 [2024-12-09 10:15:56.482129] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:24:25.899 [2024-12-09 10:15:56.482274] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:24:25.899 [2024-12-09 10:15:56.482295] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:24:25.899 [2024-12-09 10:15:56.482306] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:24:25.899 [2024-12-09 10:15:56.489903] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:24:25.899 [2024-12-09 10:15:56.489934] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:24:25.899 [2024-12-09 10:15:56.497877] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:24:25.899 [2024-12-09 10:15:56.498080] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:24:25.899 [2024-12-09 10:15:56.521891] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:24:25.899 1 00:24:25.899 10:15:56 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.899 10:15:56 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:24:26.836 10:15:57 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=76344 00:24:26.836 10:15:57 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:24:26.836 10:15:57 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:24:27.094 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:27.094 fio-3.35 00:24:27.094 Starting 1 process 00:24:32.433 10:16:02 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 76303 00:24:32.433 10:16:02 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:24:37.706 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 76303 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:24:37.706 10:16:07 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=76455 00:24:37.706 10:16:07 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:24:37.706 10:16:07 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:37.706 10:16:07 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 76455 00:24:37.706 10:16:07 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 76455 ']' 00:24:37.706 10:16:07 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:37.706 10:16:07 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:37.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:37.706 10:16:07 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:37.706 10:16:07 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:37.706 10:16:07 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:37.706 [2024-12-09 10:16:07.695356] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
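This is the crux of the recovery scenario: fio (pid 76344) keeps I/O in flight against /dev/ublkb1 while the original target (pid 76303) is killed with SIGKILL and a replacement spdk_tgt is started in its place. A sketch of the sequence, with binaries, pids, and parameters taken from this run:
  kill -9 76303                                                      # hard-kill the target mid-I/O; fio keeps running
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk &   # replacement target (becomes pid 76455)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_create_target
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_recover_disk malloc0 1   # reattach the surviving /dev/ublkb1
In the log below, the new target polls UBLK_CMD_GET_DEV_INFO roughly once per second until the kernel reports device state 1, then UBLK_CMD_START_USER_RECOVERY and UBLK_CMD_END_USER_RECOVERY complete and fio's 60-second randrw job finishes with verification intact: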
00:24:37.706 [2024-12-09 10:16:07.695609] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76455 ] 00:24:37.706 [2024-12-09 10:16:07.893505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:37.706 [2024-12-09 10:16:08.070068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.706 [2024-12-09 10:16:08.070106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:38.643 10:16:09 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:38.643 10:16:09 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:24:38.643 10:16:09 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:24:38.643 10:16:09 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.643 10:16:09 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.643 [2024-12-09 10:16:09.122950] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:24:38.643 [2024-12-09 10:16:09.126238] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:24:38.643 10:16:09 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.643 10:16:09 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:24:38.643 10:16:09 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.643 10:16:09 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.643 malloc0 00:24:38.643 10:16:09 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.643 10:16:09 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:24:38.643 10:16:09 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.643 10:16:09 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.643 [2024-12-09 10:16:09.282078] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:24:38.643 [2024-12-09 10:16:09.282135] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:24:38.643 [2024-12-09 10:16:09.282153] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:24:38.643 [2024-12-09 10:16:09.289959] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:24:38.643 [2024-12-09 10:16:09.289991] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:24:38.643 1 00:24:38.643 10:16:09 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.643 10:16:09 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 76344 00:24:39.579 [2024-12-09 10:16:10.290949] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:24:39.579 [2024-12-09 10:16:10.298872] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:24:39.579 [2024-12-09 10:16:10.298900] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:24:40.515 [2024-12-09 10:16:11.298968] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:24:40.515 [2024-12-09 10:16:11.304943] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:24:40.515 [2024-12-09 10:16:11.305003] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:24:41.891 [2024-12-09 10:16:12.305085] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:24:41.891 [2024-12-09 10:16:12.313002] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:24:41.891 [2024-12-09 10:16:12.313101] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:24:41.891 [2024-12-09 10:16:12.313117] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:24:41.891 [2024-12-09 10:16:12.313317] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:25:03.825 [2024-12-09 10:16:33.031892] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:25:03.825 [2024-12-09 10:16:33.039625] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:25:03.825 [2024-12-09 10:16:33.046154] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:25:03.825 [2024-12-09 10:16:33.046183] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:25:30.366 00:25:30.366 fio_test: (groupid=0, jobs=1): err= 0: pid=76351: Mon Dec 9 10:16:57 2024 00:25:30.366 read: IOPS=8684, BW=33.9MiB/s (35.6MB/s)(2036MiB/60003msec) 00:25:30.366 slat (nsec): min=1682, max=397700, avg=6917.96, stdev=4239.95 00:25:30.366 clat (usec): min=1289, max=30515k, avg=7597.82, stdev=351063.78 00:25:30.366 lat (usec): min=1295, max=30515k, avg=7604.74, stdev=351063.77 00:25:30.366 clat percentiles (msec): 00:25:30.366 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 4], 20.00th=[ 4], 00:25:30.366 | 30.00th=[ 4], 40.00th=[ 4], 50.00th=[ 4], 60.00th=[ 4], 00:25:30.366 | 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 5], 00:25:30.366 | 99.00th=[ 7], 99.50th=[ 7], 99.90th=[ 9], 99.95th=[ 14], 00:25:30.366 | 99.99th=[17113] 00:25:30.366 bw ( KiB/s): min=21512, max=80344, per=100.00%, avg=69597.68, stdev=9775.21, samples=59 00:25:30.366 iops : min= 5378, max=20086, avg=17399.41, stdev=2443.83, samples=59 00:25:30.366 write: IOPS=8671, BW=33.9MiB/s (35.5MB/s)(2032MiB/60003msec); 0 zone resets 00:25:30.366 slat (nsec): min=1862, max=691635, avg=7182.45, stdev=4488.94 00:25:30.366 clat (usec): min=1070, max=30515k, avg=7135.42, stdev=324879.19 00:25:30.366 lat (usec): min=1075, max=30515k, avg=7142.60, stdev=324879.17 00:25:30.366 clat percentiles (msec): 00:25:30.366 | 1.00th=[ 3], 5.00th=[ 4], 10.00th=[ 4], 20.00th=[ 4], 00:25:30.366 | 30.00th=[ 4], 40.00th=[ 4], 50.00th=[ 4], 60.00th=[ 4], 00:25:30.366 | 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 5], 95.00th=[ 5], 00:25:30.366 | 99.00th=[ 7], 99.50th=[ 8], 99.90th=[ 9], 99.95th=[ 14], 00:25:30.366 | 99.99th=[17113] 00:25:30.366 bw ( KiB/s): min=21808, max=81832, per=100.00%, avg=69484.71, stdev=9640.60, samples=59 00:25:30.366 iops : min= 5452, max=20458, avg=17371.15, stdev=2410.19, samples=59 00:25:30.366 lat (msec) : 2=0.02%, 4=87.75%, 10=12.15%, 20=0.07%, >=2000=0.01% 00:25:30.366 cpu : usr=5.27%, sys=11.34%, ctx=33872, majf=0, minf=14 00:25:30.366 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:25:30.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.366 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:30.366 issued rwts: total=521102,520307,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:30.366 latency : target=0, window=0, percentile=100.00%, depth=128 
00:25:30.366 00:25:30.366 Run status group 0 (all jobs): 00:25:30.366 READ: bw=33.9MiB/s (35.6MB/s), 33.9MiB/s-33.9MiB/s (35.6MB/s-35.6MB/s), io=2036MiB (2134MB), run=60003-60003msec 00:25:30.366 WRITE: bw=33.9MiB/s (35.5MB/s), 33.9MiB/s-33.9MiB/s (35.5MB/s-35.5MB/s), io=2032MiB (2131MB), run=60003-60003msec 00:25:30.366 00:25:30.366 Disk stats (read/write): 00:25:30.366 ublkb1: ios=518994/518137, merge=0/0, ticks=3900484/3589620, in_queue=7490105, util=99.96% 00:25:30.366 10:16:57 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:25:30.366 10:16:57 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.366 10:16:57 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.366 [2024-12-09 10:16:57.822752] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:25:30.366 [2024-12-09 10:16:57.854055] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:25:30.366 [2024-12-09 10:16:57.854327] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:25:30.366 [2024-12-09 10:16:57.861967] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:25:30.366 [2024-12-09 10:16:57.862168] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:25:30.366 [2024-12-09 10:16:57.862183] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:25:30.366 10:16:57 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.366 10:16:57 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:25:30.366 10:16:57 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.366 10:16:57 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.367 [2024-12-09 10:16:57.876101] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:25:30.367 [2024-12-09 10:16:57.883993] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:25:30.367 [2024-12-09 10:16:57.884076] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:25:30.367 10:16:57 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.367 10:16:57 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:25:30.367 10:16:57 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:25:30.367 10:16:57 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 76455 00:25:30.367 10:16:57 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 76455 ']' 00:25:30.367 10:16:57 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 76455 00:25:30.367 10:16:57 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:25:30.367 10:16:57 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:30.367 10:16:57 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76455 00:25:30.367 10:16:57 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:30.367 10:16:57 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:30.367 killing process with pid 76455 00:25:30.367 10:16:57 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76455' 00:25:30.367 10:16:57 ublk_recovery -- common/autotest_common.sh@973 -- # kill 76455 00:25:30.367 10:16:57 ublk_recovery -- common/autotest_common.sh@978 -- # wait 76455 00:25:30.367 [2024-12-09 10:16:59.563914] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:25:30.367 [2024-12-09 10:16:59.563991] ublk.c: 
766:_ublk_fini_done: *DEBUG*: 00:25:30.367 00:25:30.367 real 1m6.461s 00:25:30.367 user 1m50.849s 00:25:30.367 sys 0m20.922s 00:25:30.367 10:17:01 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:30.367 ************************************ 00:25:30.367 END TEST ublk_recovery 00:25:30.367 10:17:01 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.367 ************************************ 00:25:30.625 10:17:01 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:25:30.625 10:17:01 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:25:30.625 10:17:01 -- spdk/autotest.sh@260 -- # timing_exit lib 00:25:30.625 10:17:01 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:30.625 10:17:01 -- common/autotest_common.sh@10 -- # set +x 00:25:30.625 10:17:01 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:25:30.625 10:17:01 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:25:30.625 10:17:01 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:25:30.625 10:17:01 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:25:30.625 10:17:01 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:25:30.625 10:17:01 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:25:30.625 10:17:01 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:25:30.625 10:17:01 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:25:30.625 10:17:01 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:25:30.625 10:17:01 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:25:30.625 10:17:01 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:25:30.625 10:17:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:30.625 10:17:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:30.625 10:17:01 -- common/autotest_common.sh@10 -- # set +x 00:25:30.625 ************************************ 00:25:30.625 START TEST ftl 00:25:30.625 ************************************ 00:25:30.625 10:17:01 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:25:30.625 * Looking for test storage... 00:25:30.625 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:30.625 10:17:01 ftl -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:30.625 10:17:01 ftl -- common/autotest_common.sh@1711 -- # lcov --version 00:25:30.625 10:17:01 ftl -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:30.625 10:17:01 ftl -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:30.625 10:17:01 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:30.626 10:17:01 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:30.626 10:17:01 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:30.626 10:17:01 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:25:30.626 10:17:01 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:25:30.626 10:17:01 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:25:30.626 10:17:01 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:25:30.626 10:17:01 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:25:30.626 10:17:01 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:25:30.626 10:17:01 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:25:30.626 10:17:01 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:30.626 10:17:01 ftl -- scripts/common.sh@344 -- # case "$op" in 00:25:30.626 10:17:01 ftl -- scripts/common.sh@345 -- # : 1 00:25:30.626 10:17:01 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:30.626 10:17:01 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:30.626 10:17:01 ftl -- scripts/common.sh@365 -- # decimal 1 00:25:30.626 10:17:01 ftl -- scripts/common.sh@353 -- # local d=1 00:25:30.626 10:17:01 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:30.626 10:17:01 ftl -- scripts/common.sh@355 -- # echo 1 00:25:30.626 10:17:01 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:25:30.626 10:17:01 ftl -- scripts/common.sh@366 -- # decimal 2 00:25:30.626 10:17:01 ftl -- scripts/common.sh@353 -- # local d=2 00:25:30.626 10:17:01 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:30.626 10:17:01 ftl -- scripts/common.sh@355 -- # echo 2 00:25:30.626 10:17:01 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:25:30.626 10:17:01 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:30.626 10:17:01 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:30.626 10:17:01 ftl -- scripts/common.sh@368 -- # return 0 00:25:30.626 10:17:01 ftl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:30.626 10:17:01 ftl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:30.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.626 --rc genhtml_branch_coverage=1 00:25:30.626 --rc genhtml_function_coverage=1 00:25:30.626 --rc genhtml_legend=1 00:25:30.626 --rc geninfo_all_blocks=1 00:25:30.626 --rc geninfo_unexecuted_blocks=1 00:25:30.626 00:25:30.626 ' 00:25:30.626 10:17:01 ftl -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:30.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.626 --rc genhtml_branch_coverage=1 00:25:30.626 --rc genhtml_function_coverage=1 00:25:30.626 --rc genhtml_legend=1 00:25:30.626 --rc geninfo_all_blocks=1 00:25:30.626 --rc geninfo_unexecuted_blocks=1 00:25:30.626 00:25:30.626 ' 00:25:30.626 10:17:01 ftl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:30.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.626 --rc genhtml_branch_coverage=1 00:25:30.626 --rc genhtml_function_coverage=1 00:25:30.626 --rc genhtml_legend=1 00:25:30.626 --rc geninfo_all_blocks=1 00:25:30.626 --rc geninfo_unexecuted_blocks=1 00:25:30.626 00:25:30.626 ' 00:25:30.626 10:17:01 ftl -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:30.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.626 --rc genhtml_branch_coverage=1 00:25:30.626 --rc genhtml_function_coverage=1 00:25:30.626 --rc genhtml_legend=1 00:25:30.626 --rc geninfo_all_blocks=1 00:25:30.626 --rc geninfo_unexecuted_blocks=1 00:25:30.626 00:25:30.626 ' 00:25:30.626 10:17:01 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:30.626 10:17:01 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:25:30.626 10:17:01 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:30.884 10:17:01 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:30.884 10:17:01 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
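The lt/cmp_versions trace above is scripts/common.sh deciding that lcov 1.15 predates 2: both version strings are split on '.', '-' and ':' into arrays (read -ra under IFS=.-:) and compared component by component. A minimal standalone sketch of that logic; the name version_lt and the zero-padding via ${arr[v]:-0} are illustrative assumptions, not the verbatim in-tree helper, which dispatches on an operator argument through the case "$op" block seen in the trace:

    # Component-wise version compare, sketched from the trace above.
    version_lt() {
        local IFS=.-: v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # greater -> not less-than
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly smaller -> true
        done
        return 1   # equal is not less-than
    }
    version_lt 1.15 2 && echo "1.15 < 2"   # matches the trace: ver1[0]=1 < ver2[0]=2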
00:25:30.884 10:17:01 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:30.884 10:17:01 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:30.884 10:17:01 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:30.884 10:17:01 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:30.884 10:17:01 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:30.884 10:17:01 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:30.884 10:17:01 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:30.884 10:17:01 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:30.884 10:17:01 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:30.884 10:17:01 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:30.884 10:17:01 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:30.884 10:17:01 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:30.884 10:17:01 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:30.884 10:17:01 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:30.884 10:17:01 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:30.884 10:17:01 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:30.884 10:17:01 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:30.884 10:17:01 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:30.884 10:17:01 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:30.884 10:17:01 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:30.884 10:17:01 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:30.884 10:17:01 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:30.884 10:17:01 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:30.884 10:17:01 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:30.884 10:17:01 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:30.884 10:17:01 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:25:30.884 10:17:01 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:25:30.884 10:17:01 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:25:30.884 10:17:01 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:25:30.884 10:17:01 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:31.212 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:31.212 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:31.212 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:31.212 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:31.212 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:31.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
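The target bring-up that follows uses SPDK's deferred-init pattern: spdk_tgt is started with --wait-for-rpc, the harness waits for the RPC socket to answer (the waitforlisten helper behind the "Waiting for process..." message above), bdev options are set while the framework is still paused, and only then does framework_start_init complete subsystem initialization (ftl.sh steps @36-@43 below). A hedged sketch of that sequence, with waitforlisten approximated by polling rpc.py; $spdk_tgt_bin and $rpc_py are the variables exported by common.sh above:

    # Deferred-init launch (sketch; the real waitforlisten helper does more checks).
    "$spdk_tgt_bin" --wait-for-rpc &
    spdk_tgt_pid=$!
    until "$rpc_py" -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1    # block until /var/tmp/spdk.sock accepts RPCs
    done
    "$rpc_py" bdev_set_options -d    # adjust bdev options pre-init, as at ftl.sh@40
    "$rpc_py" framework_start_init   # finish initialization, as at ftl.sh@41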
00:25:31.471 10:17:01 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=77246 00:25:31.471 10:17:01 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:25:31.471 10:17:01 ftl -- ftl/ftl.sh@38 -- # waitforlisten 77246 00:25:31.471 10:17:01 ftl -- common/autotest_common.sh@835 -- # '[' -z 77246 ']' 00:25:31.471 10:17:01 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:31.471 10:17:01 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:31.472 10:17:01 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:31.472 10:17:01 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:31.472 10:17:01 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:31.472 [2024-12-09 10:17:02.135086] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:25:31.472 [2024-12-09 10:17:02.135286] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77246 ] 00:25:31.730 [2024-12-09 10:17:02.331915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.730 [2024-12-09 10:17:02.511900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:32.665 10:17:03 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:32.665 10:17:03 ftl -- common/autotest_common.sh@868 -- # return 0 00:25:32.665 10:17:03 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:25:32.923 10:17:03 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:25:34.300 10:17:04 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:25:34.300 10:17:04 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:34.557 10:17:05 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:25:34.557 10:17:05 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:25:34.557 10:17:05 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:25:34.815 10:17:05 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:25:34.815 10:17:05 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:25:34.815 10:17:05 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:25:34.815 10:17:05 ftl -- ftl/ftl.sh@50 -- # break 00:25:34.815 10:17:05 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:25:34.815 10:17:05 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:25:34.815 10:17:05 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:25:34.815 10:17:05 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:25:35.074 10:17:05 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:25:35.074 10:17:05 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:25:35.074 10:17:05 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:25:35.074 10:17:05 ftl -- ftl/ftl.sh@63 -- # break 00:25:35.074 10:17:05 ftl -- ftl/ftl.sh@66 -- # killprocess 77246 00:25:35.074 10:17:05 ftl -- common/autotest_common.sh@954 -- # '[' -z 77246 ']' 00:25:35.074 10:17:05 
ftl -- common/autotest_common.sh@958 -- # kill -0 77246 00:25:35.074 10:17:05 ftl -- common/autotest_common.sh@959 -- # uname 00:25:35.074 10:17:05 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:35.074 10:17:05 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77246 00:25:35.074 10:17:05 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:35.074 10:17:05 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:35.074 killing process with pid 77246 00:25:35.074 10:17:05 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77246' 00:25:35.074 10:17:05 ftl -- common/autotest_common.sh@973 -- # kill 77246 00:25:35.074 10:17:05 ftl -- common/autotest_common.sh@978 -- # wait 77246 00:25:37.608 10:17:08 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:25:37.608 10:17:08 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:25:37.608 10:17:08 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:37.608 10:17:08 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:37.608 10:17:08 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:37.608 ************************************ 00:25:37.608 START TEST ftl_fio_basic 00:25:37.608 ************************************ 00:25:37.608 10:17:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:25:37.608 * Looking for test storage... 00:25:37.608 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:37.608 10:17:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:37.608 10:17:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lcov --version 00:25:37.608 10:17:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:37.608 10:17:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:37.608 10:17:08 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:37.608 10:17:08 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:37.608 10:17:08 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:37.608 10:17:08 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:25:37.608 10:17:08 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:25:37.608 10:17:08 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:25:37.608 10:17:08 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:25:37.608 10:17:08 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:25:37.608 10:17:08 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:25:37.608 10:17:08 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:25:37.608 10:17:08 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:37.608 10:17:08 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:25:37.608 10:17:08 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:25:37.608 10:17:08 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:37.608 10:17:08 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:37.608 10:17:08 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:25:37.608 10:17:08 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:25:37.608 10:17:08 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:37.608 10:17:08 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:25:37.608 10:17:08 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:25:37.608 10:17:08 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:25:37.608 10:17:08 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:25:37.608 10:17:08 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:37.608 10:17:08 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:25:37.608 10:17:08 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:25:37.608 10:17:08 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:37.608 10:17:08 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:37.608 10:17:08 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:25:37.608 10:17:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:37.608 10:17:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:37.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.608 --rc genhtml_branch_coverage=1 00:25:37.608 --rc genhtml_function_coverage=1 00:25:37.608 --rc genhtml_legend=1 00:25:37.608 --rc geninfo_all_blocks=1 00:25:37.608 --rc geninfo_unexecuted_blocks=1 00:25:37.608 00:25:37.608 ' 00:25:37.608 10:17:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:37.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.609 --rc genhtml_branch_coverage=1 00:25:37.609 --rc genhtml_function_coverage=1 00:25:37.609 --rc genhtml_legend=1 00:25:37.609 --rc geninfo_all_blocks=1 00:25:37.609 --rc geninfo_unexecuted_blocks=1 00:25:37.609 00:25:37.609 ' 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:37.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.609 --rc genhtml_branch_coverage=1 00:25:37.609 --rc genhtml_function_coverage=1 00:25:37.609 --rc genhtml_legend=1 00:25:37.609 --rc geninfo_all_blocks=1 00:25:37.609 --rc geninfo_unexecuted_blocks=1 00:25:37.609 00:25:37.609 ' 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:37.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.609 --rc genhtml_branch_coverage=1 00:25:37.609 --rc genhtml_function_coverage=1 00:25:37.609 --rc genhtml_legend=1 00:25:37.609 --rc geninfo_all_blocks=1 00:25:37.609 --rc geninfo_unexecuted_blocks=1 00:25:37.609 00:25:37.609 ' 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
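As with ftl.sh earlier, sourcing ftl/common.sh first re-derives the test directories from the calling script's path before exporting the target and initiator settings repeated below; condensed, with the literal fio.sh path from the trace:

    # Path bootstrap traced at common.sh@8-10 (condensed sketch):
    testdir=$(readlink -f "$(dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh)")
    rootdir=$(readlink -f "$testdir/../..")    # -> /home/vagrant/spdk_repo/spdk
    rpc_py=$rootdir/scripts/rpc.py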
00:25:37.609 10:17:08 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=77396 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 77396 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 77396 ']' 00:25:37.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:37.609 10:17:08 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:25:37.868 [2024-12-09 10:17:08.467381] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
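Where ftl.sh above ran the target on the default single-core mask (one reactor on core 0), fio.sh launches spdk_tgt with -m 7. The mask is a hexadecimal core bitmap, bit i enabling core i, so 0x7 selects cores 0-2 and accounts for the "-c 7", "Total cores available: 3" and the three reactor threads in the EAL output below. A quick decode:

    # -m/-c take a hex core bitmap; decode 0x7:
    mask=0x7
    for (( i = 0; i < 8; i++ )); do
        (( (mask >> i) & 1 )) && echo "core $i enabled"
    done
    # prints cores 0, 1 and 2: the three reactors started below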
00:25:37.868 [2024-12-09 10:17:08.467563] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77396 ] 00:25:37.868 [2024-12-09 10:17:08.650276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:38.127 [2024-12-09 10:17:08.818981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:38.127 [2024-12-09 10:17:08.819081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:38.127 [2024-12-09 10:17:08.819099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:39.064 10:17:09 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:39.064 10:17:09 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:25:39.064 10:17:09 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:39.064 10:17:09 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:25:39.064 10:17:09 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:39.064 10:17:09 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:25:39.064 10:17:09 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:25:39.064 10:17:09 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:39.632 10:17:10 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:39.632 10:17:10 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:25:39.632 10:17:10 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:39.632 10:17:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:25:39.632 10:17:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:39.632 10:17:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:25:39.632 10:17:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:25:39.632 10:17:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:39.891 10:17:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:39.891 { 00:25:39.891 "name": "nvme0n1", 00:25:39.891 "aliases": [ 00:25:39.891 "d887f298-bdc7-42dd-9929-b015c6bf8c7b" 00:25:39.891 ], 00:25:39.891 "product_name": "NVMe disk", 00:25:39.891 "block_size": 4096, 00:25:39.891 "num_blocks": 1310720, 00:25:39.891 "uuid": "d887f298-bdc7-42dd-9929-b015c6bf8c7b", 00:25:39.891 "numa_id": -1, 00:25:39.891 "assigned_rate_limits": { 00:25:39.891 "rw_ios_per_sec": 0, 00:25:39.891 "rw_mbytes_per_sec": 0, 00:25:39.891 "r_mbytes_per_sec": 0, 00:25:39.891 "w_mbytes_per_sec": 0 00:25:39.891 }, 00:25:39.891 "claimed": false, 00:25:39.891 "zoned": false, 00:25:39.891 "supported_io_types": { 00:25:39.891 "read": true, 00:25:39.891 "write": true, 00:25:39.891 "unmap": true, 00:25:39.891 "flush": true, 00:25:39.891 "reset": true, 00:25:39.891 "nvme_admin": true, 00:25:39.891 "nvme_io": true, 00:25:39.891 "nvme_io_md": false, 00:25:39.891 "write_zeroes": true, 00:25:39.891 "zcopy": false, 00:25:39.891 "get_zone_info": false, 00:25:39.891 "zone_management": false, 00:25:39.891 "zone_append": false, 00:25:39.891 "compare": true, 00:25:39.891 "compare_and_write": false, 00:25:39.891 "abort": true, 00:25:39.891 
"seek_hole": false, 00:25:39.891 "seek_data": false, 00:25:39.891 "copy": true, 00:25:39.891 "nvme_iov_md": false 00:25:39.891 }, 00:25:39.891 "driver_specific": { 00:25:39.891 "nvme": [ 00:25:39.891 { 00:25:39.891 "pci_address": "0000:00:11.0", 00:25:39.891 "trid": { 00:25:39.891 "trtype": "PCIe", 00:25:39.891 "traddr": "0000:00:11.0" 00:25:39.891 }, 00:25:39.891 "ctrlr_data": { 00:25:39.891 "cntlid": 0, 00:25:39.891 "vendor_id": "0x1b36", 00:25:39.891 "model_number": "QEMU NVMe Ctrl", 00:25:39.891 "serial_number": "12341", 00:25:39.891 "firmware_revision": "8.0.0", 00:25:39.891 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:39.891 "oacs": { 00:25:39.891 "security": 0, 00:25:39.891 "format": 1, 00:25:39.891 "firmware": 0, 00:25:39.891 "ns_manage": 1 00:25:39.891 }, 00:25:39.891 "multi_ctrlr": false, 00:25:39.891 "ana_reporting": false 00:25:39.891 }, 00:25:39.891 "vs": { 00:25:39.891 "nvme_version": "1.4" 00:25:39.891 }, 00:25:39.891 "ns_data": { 00:25:39.891 "id": 1, 00:25:39.891 "can_share": false 00:25:39.891 } 00:25:39.891 } 00:25:39.891 ], 00:25:39.891 "mp_policy": "active_passive" 00:25:39.891 } 00:25:39.891 } 00:25:39.891 ]' 00:25:39.891 10:17:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:39.891 10:17:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:25:39.891 10:17:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:39.891 10:17:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:25:39.891 10:17:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:25:39.891 10:17:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:25:39.891 10:17:10 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:25:39.891 10:17:10 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:39.891 10:17:10 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:25:39.891 10:17:10 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:39.891 10:17:10 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:40.150 10:17:10 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:25:40.150 10:17:10 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:40.408 10:17:11 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=06f9f117-89be-4865-a890-9b761919d9eb 00:25:40.408 10:17:11 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 06f9f117-89be-4865-a890-9b761919d9eb 00:25:40.974 10:17:11 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=58b9ffe1-08b3-4b3d-bac2-bf5a5645e2ca 00:25:40.974 10:17:11 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 58b9ffe1-08b3-4b3d-bac2-bf5a5645e2ca 00:25:40.974 10:17:11 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:25:40.974 10:17:11 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:40.974 10:17:11 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=58b9ffe1-08b3-4b3d-bac2-bf5a5645e2ca 00:25:40.974 10:17:11 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:25:40.974 10:17:11 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 58b9ffe1-08b3-4b3d-bac2-bf5a5645e2ca 00:25:40.974 10:17:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=58b9ffe1-08b3-4b3d-bac2-bf5a5645e2ca 
00:25:40.974 10:17:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:40.974 10:17:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:25:40.974 10:17:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:25:40.974 10:17:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 58b9ffe1-08b3-4b3d-bac2-bf5a5645e2ca 00:25:41.234 10:17:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:41.234 { 00:25:41.234 "name": "58b9ffe1-08b3-4b3d-bac2-bf5a5645e2ca", 00:25:41.234 "aliases": [ 00:25:41.234 "lvs/nvme0n1p0" 00:25:41.234 ], 00:25:41.234 "product_name": "Logical Volume", 00:25:41.234 "block_size": 4096, 00:25:41.234 "num_blocks": 26476544, 00:25:41.234 "uuid": "58b9ffe1-08b3-4b3d-bac2-bf5a5645e2ca", 00:25:41.234 "assigned_rate_limits": { 00:25:41.234 "rw_ios_per_sec": 0, 00:25:41.234 "rw_mbytes_per_sec": 0, 00:25:41.234 "r_mbytes_per_sec": 0, 00:25:41.234 "w_mbytes_per_sec": 0 00:25:41.234 }, 00:25:41.234 "claimed": false, 00:25:41.234 "zoned": false, 00:25:41.234 "supported_io_types": { 00:25:41.234 "read": true, 00:25:41.234 "write": true, 00:25:41.234 "unmap": true, 00:25:41.234 "flush": false, 00:25:41.234 "reset": true, 00:25:41.234 "nvme_admin": false, 00:25:41.234 "nvme_io": false, 00:25:41.234 "nvme_io_md": false, 00:25:41.234 "write_zeroes": true, 00:25:41.234 "zcopy": false, 00:25:41.234 "get_zone_info": false, 00:25:41.234 "zone_management": false, 00:25:41.234 "zone_append": false, 00:25:41.234 "compare": false, 00:25:41.234 "compare_and_write": false, 00:25:41.234 "abort": false, 00:25:41.234 "seek_hole": true, 00:25:41.234 "seek_data": true, 00:25:41.234 "copy": false, 00:25:41.234 "nvme_iov_md": false 00:25:41.234 }, 00:25:41.235 "driver_specific": { 00:25:41.235 "lvol": { 00:25:41.235 "lvol_store_uuid": "06f9f117-89be-4865-a890-9b761919d9eb", 00:25:41.235 "base_bdev": "nvme0n1", 00:25:41.235 "thin_provision": true, 00:25:41.235 "num_allocated_clusters": 0, 00:25:41.235 "snapshot": false, 00:25:41.235 "clone": false, 00:25:41.235 "esnap_clone": false 00:25:41.235 } 00:25:41.235 } 00:25:41.235 } 00:25:41.235 ]' 00:25:41.235 10:17:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:41.235 10:17:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:25:41.235 10:17:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:41.235 10:17:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:41.235 10:17:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:41.235 10:17:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:25:41.235 10:17:11 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:25:41.235 10:17:11 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:25:41.235 10:17:11 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:41.494 10:17:12 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:41.494 10:17:12 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:41.494 10:17:12 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 58b9ffe1-08b3-4b3d-bac2-bf5a5645e2ca 00:25:41.494 10:17:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=58b9ffe1-08b3-4b3d-bac2-bf5a5645e2ca 00:25:41.494 10:17:12 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:41.494 10:17:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:25:41.494 10:17:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:25:41.494 10:17:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 58b9ffe1-08b3-4b3d-bac2-bf5a5645e2ca 00:25:41.752 10:17:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:41.752 { 00:25:41.752 "name": "58b9ffe1-08b3-4b3d-bac2-bf5a5645e2ca", 00:25:41.752 "aliases": [ 00:25:41.752 "lvs/nvme0n1p0" 00:25:41.752 ], 00:25:41.752 "product_name": "Logical Volume", 00:25:41.752 "block_size": 4096, 00:25:41.752 "num_blocks": 26476544, 00:25:41.752 "uuid": "58b9ffe1-08b3-4b3d-bac2-bf5a5645e2ca", 00:25:41.752 "assigned_rate_limits": { 00:25:41.752 "rw_ios_per_sec": 0, 00:25:41.752 "rw_mbytes_per_sec": 0, 00:25:41.752 "r_mbytes_per_sec": 0, 00:25:41.752 "w_mbytes_per_sec": 0 00:25:41.752 }, 00:25:41.752 "claimed": false, 00:25:41.752 "zoned": false, 00:25:41.752 "supported_io_types": { 00:25:41.752 "read": true, 00:25:41.752 "write": true, 00:25:41.752 "unmap": true, 00:25:41.752 "flush": false, 00:25:41.752 "reset": true, 00:25:41.752 "nvme_admin": false, 00:25:41.752 "nvme_io": false, 00:25:41.752 "nvme_io_md": false, 00:25:41.752 "write_zeroes": true, 00:25:41.752 "zcopy": false, 00:25:41.752 "get_zone_info": false, 00:25:41.752 "zone_management": false, 00:25:41.752 "zone_append": false, 00:25:41.752 "compare": false, 00:25:41.752 "compare_and_write": false, 00:25:41.752 "abort": false, 00:25:41.752 "seek_hole": true, 00:25:41.752 "seek_data": true, 00:25:41.752 "copy": false, 00:25:41.752 "nvme_iov_md": false 00:25:41.752 }, 00:25:41.752 "driver_specific": { 00:25:41.752 "lvol": { 00:25:41.752 "lvol_store_uuid": "06f9f117-89be-4865-a890-9b761919d9eb", 00:25:41.752 "base_bdev": "nvme0n1", 00:25:41.752 "thin_provision": true, 00:25:41.752 "num_allocated_clusters": 0, 00:25:41.752 "snapshot": false, 00:25:41.752 "clone": false, 00:25:41.752 "esnap_clone": false 00:25:41.752 } 00:25:41.752 } 00:25:41.752 } 00:25:41.752 ]' 00:25:41.752 10:17:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:41.752 10:17:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:25:41.752 10:17:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:42.011 10:17:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:42.011 10:17:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:42.011 10:17:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:25:42.011 10:17:12 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:25:42.011 10:17:12 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:42.270 10:17:12 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:25:42.270 10:17:12 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:25:42.270 10:17:12 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:25:42.270 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:25:42.270 10:17:12 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 58b9ffe1-08b3-4b3d-bac2-bf5a5645e2ca 00:25:42.270 10:17:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=58b9ffe1-08b3-4b3d-bac2-bf5a5645e2ca 00:25:42.270 10:17:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:42.270 10:17:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:25:42.270 10:17:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:25:42.270 10:17:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 58b9ffe1-08b3-4b3d-bac2-bf5a5645e2ca 00:25:42.528 10:17:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:42.528 { 00:25:42.528 "name": "58b9ffe1-08b3-4b3d-bac2-bf5a5645e2ca", 00:25:42.528 "aliases": [ 00:25:42.528 "lvs/nvme0n1p0" 00:25:42.528 ], 00:25:42.528 "product_name": "Logical Volume", 00:25:42.528 "block_size": 4096, 00:25:42.528 "num_blocks": 26476544, 00:25:42.528 "uuid": "58b9ffe1-08b3-4b3d-bac2-bf5a5645e2ca", 00:25:42.528 "assigned_rate_limits": { 00:25:42.528 "rw_ios_per_sec": 0, 00:25:42.528 "rw_mbytes_per_sec": 0, 00:25:42.528 "r_mbytes_per_sec": 0, 00:25:42.529 "w_mbytes_per_sec": 0 00:25:42.529 }, 00:25:42.529 "claimed": false, 00:25:42.529 "zoned": false, 00:25:42.529 "supported_io_types": { 00:25:42.529 "read": true, 00:25:42.529 "write": true, 00:25:42.529 "unmap": true, 00:25:42.529 "flush": false, 00:25:42.529 "reset": true, 00:25:42.529 "nvme_admin": false, 00:25:42.529 "nvme_io": false, 00:25:42.529 "nvme_io_md": false, 00:25:42.529 "write_zeroes": true, 00:25:42.529 "zcopy": false, 00:25:42.529 "get_zone_info": false, 00:25:42.529 "zone_management": false, 00:25:42.529 "zone_append": false, 00:25:42.529 "compare": false, 00:25:42.529 "compare_and_write": false, 00:25:42.529 "abort": false, 00:25:42.529 "seek_hole": true, 00:25:42.529 "seek_data": true, 00:25:42.529 "copy": false, 00:25:42.529 "nvme_iov_md": false 00:25:42.529 }, 00:25:42.529 "driver_specific": { 00:25:42.529 "lvol": { 00:25:42.529 "lvol_store_uuid": "06f9f117-89be-4865-a890-9b761919d9eb", 00:25:42.529 "base_bdev": "nvme0n1", 00:25:42.529 "thin_provision": true, 00:25:42.529 "num_allocated_clusters": 0, 00:25:42.529 "snapshot": false, 00:25:42.529 "clone": false, 00:25:42.529 "esnap_clone": false 00:25:42.529 } 00:25:42.529 } 00:25:42.529 } 00:25:42.529 ]' 00:25:42.529 10:17:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:42.529 10:17:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:25:42.529 10:17:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:42.529 10:17:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:42.529 10:17:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:42.529 10:17:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:25:42.529 10:17:13 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:25:42.529 10:17:13 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:25:42.529 10:17:13 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 58b9ffe1-08b3-4b3d-bac2-bf5a5645e2ca -c nvc0n1p0 --l2p_dram_limit 60 00:25:43.097 [2024-12-09 10:17:13.646280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.097 [2024-12-09 10:17:13.646690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:43.098 [2024-12-09 10:17:13.646732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:43.098 
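One real wart surfaced a few lines back: "/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected" at the traced '[' -eq 1 ']'. That is the classic empty-expansion trap: a variable that happens to be unset expands to nothing inside single-bracket [ ], leaving -eq with no left operand; the test merely returns false and the run continues, which is why the log proceeds to the l2p sizing. Hedged repair patterns (VAR is a stand-in name, not the actual fio.sh variable):

    # Empty expansion breaks [ ]; default it, or use [[ ]], which treats empty as 0.
    unset VAR
    if [ "${VAR:-0}" -eq 1 ]; then echo yes; fi    # quote and default the expansion
    if [[ $VAR -eq 1 ]]; then echo yes; fi         # [[ ]] arithmetic tolerates empty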
[2024-12-09 10:17:13.646748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.098 [2024-12-09 10:17:13.646880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.098 [2024-12-09 10:17:13.646907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:43.098 [2024-12-09 10:17:13.646924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:25:43.098 [2024-12-09 10:17:13.646938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.098 [2024-12-09 10:17:13.647000] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:43.098 [2024-12-09 10:17:13.648163] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:43.098 [2024-12-09 10:17:13.648209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.098 [2024-12-09 10:17:13.648225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:43.098 [2024-12-09 10:17:13.648242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.231 ms 00:25:43.098 [2024-12-09 10:17:13.648260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.098 [2024-12-09 10:17:13.648495] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 1b5e3b67-6e0f-4049-8e91-d4cfcc014636 00:25:43.098 [2024-12-09 10:17:13.650481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.098 [2024-12-09 10:17:13.650542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:43.098 [2024-12-09 10:17:13.650573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:25:43.098 [2024-12-09 10:17:13.650588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.098 [2024-12-09 10:17:13.660836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.098 [2024-12-09 10:17:13.660944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:43.098 [2024-12-09 10:17:13.660965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.141 ms 00:25:43.098 [2024-12-09 10:17:13.660981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.098 [2024-12-09 10:17:13.661173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.098 [2024-12-09 10:17:13.661198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:43.098 [2024-12-09 10:17:13.661211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:25:43.098 [2024-12-09 10:17:13.661248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.098 [2024-12-09 10:17:13.661367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.098 [2024-12-09 10:17:13.661398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:43.098 [2024-12-09 10:17:13.661413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:43.098 [2024-12-09 10:17:13.661428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.098 [2024-12-09 10:17:13.661475] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:43.098 [2024-12-09 10:17:13.666932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.098 [2024-12-09 
10:17:13.667107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:43.098 [2024-12-09 10:17:13.667146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.464 ms 00:25:43.098 [2024-12-09 10:17:13.667163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.098 [2024-12-09 10:17:13.667256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.098 [2024-12-09 10:17:13.667279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:43.098 [2024-12-09 10:17:13.667296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:25:43.098 [2024-12-09 10:17:13.667308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.098 [2024-12-09 10:17:13.667373] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:43.098 [2024-12-09 10:17:13.667613] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:43.098 [2024-12-09 10:17:13.667658] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:43.098 [2024-12-09 10:17:13.667676] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:43.098 [2024-12-09 10:17:13.667694] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:43.098 [2024-12-09 10:17:13.667725] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:43.098 [2024-12-09 10:17:13.667743] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:43.098 [2024-12-09 10:17:13.667755] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:43.098 [2024-12-09 10:17:13.667770] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:43.098 [2024-12-09 10:17:13.667782] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:43.098 [2024-12-09 10:17:13.667814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.098 [2024-12-09 10:17:13.667845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:43.098 [2024-12-09 10:17:13.667864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.444 ms 00:25:43.098 [2024-12-09 10:17:13.667877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.098 [2024-12-09 10:17:13.667986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.098 [2024-12-09 10:17:13.668003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:43.098 [2024-12-09 10:17:13.668018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:25:43.098 [2024-12-09 10:17:13.668030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.098 [2024-12-09 10:17:13.668172] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:43.098 [2024-12-09 10:17:13.668191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:43.098 [2024-12-09 10:17:13.668212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:43.098 [2024-12-09 10:17:13.668224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:43.098 [2024-12-09 10:17:13.668239] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:25:43.098 [2024-12-09 10:17:13.668250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:43.098 [2024-12-09 10:17:13.668263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:43.098 [2024-12-09 10:17:13.668274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:43.098 [2024-12-09 10:17:13.668290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:43.098 [2024-12-09 10:17:13.668301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:43.098 [2024-12-09 10:17:13.668327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:43.098 [2024-12-09 10:17:13.668339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:43.098 [2024-12-09 10:17:13.668353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:43.098 [2024-12-09 10:17:13.668364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:43.098 [2024-12-09 10:17:13.668378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:43.098 [2024-12-09 10:17:13.668389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:43.098 [2024-12-09 10:17:13.668405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:43.098 [2024-12-09 10:17:13.668417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:43.098 [2024-12-09 10:17:13.668430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:43.098 [2024-12-09 10:17:13.668441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:43.098 [2024-12-09 10:17:13.668454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:43.098 [2024-12-09 10:17:13.668466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:43.098 [2024-12-09 10:17:13.668480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:43.098 [2024-12-09 10:17:13.668492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:43.098 [2024-12-09 10:17:13.668505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:43.098 [2024-12-09 10:17:13.668516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:43.098 [2024-12-09 10:17:13.668530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:43.098 [2024-12-09 10:17:13.668540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:43.098 [2024-12-09 10:17:13.668554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:43.098 [2024-12-09 10:17:13.668565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:43.098 [2024-12-09 10:17:13.668579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:43.098 [2024-12-09 10:17:13.668590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:43.098 [2024-12-09 10:17:13.668606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:43.098 [2024-12-09 10:17:13.668645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:43.098 [2024-12-09 10:17:13.668661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:43.098 [2024-12-09 10:17:13.668673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:43.098 [2024-12-09 10:17:13.668686] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:43.098 [2024-12-09 10:17:13.668697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:43.098 [2024-12-09 10:17:13.668711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:43.098 [2024-12-09 10:17:13.668722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:43.098 [2024-12-09 10:17:13.668736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:43.098 [2024-12-09 10:17:13.668747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:43.098 [2024-12-09 10:17:13.668766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:43.098 [2024-12-09 10:17:13.668778] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:43.098 [2024-12-09 10:17:13.668793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:43.098 [2024-12-09 10:17:13.668806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:43.098 [2024-12-09 10:17:13.668820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:43.098 [2024-12-09 10:17:13.668848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:43.098 [2024-12-09 10:17:13.668866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:43.098 [2024-12-09 10:17:13.668878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:43.098 [2024-12-09 10:17:13.668892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:43.098 [2024-12-09 10:17:13.668903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:43.099 [2024-12-09 10:17:13.668917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:43.099 [2024-12-09 10:17:13.668931] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:43.099 [2024-12-09 10:17:13.668949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:43.099 [2024-12-09 10:17:13.668963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:43.099 [2024-12-09 10:17:13.668977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:43.099 [2024-12-09 10:17:13.668989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:43.099 [2024-12-09 10:17:13.669003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:43.099 [2024-12-09 10:17:13.669015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:43.099 [2024-12-09 10:17:13.669031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:43.099 [2024-12-09 10:17:13.669043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:43.099 [2024-12-09 10:17:13.669057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:25:43.099 [2024-12-09 10:17:13.669070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:43.099 [2024-12-09 10:17:13.669087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:43.099 [2024-12-09 10:17:13.669099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:43.099 [2024-12-09 10:17:13.669113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:43.099 [2024-12-09 10:17:13.669125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:43.099 [2024-12-09 10:17:13.669139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:43.099 [2024-12-09 10:17:13.669151] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:43.099 [2024-12-09 10:17:13.669167] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:43.099 [2024-12-09 10:17:13.669183] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:43.099 [2024-12-09 10:17:13.669197] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:43.099 [2024-12-09 10:17:13.669209] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:43.099 [2024-12-09 10:17:13.669234] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:43.099 [2024-12-09 10:17:13.669248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.099 [2024-12-09 10:17:13.669263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:43.099 [2024-12-09 10:17:13.669275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.147 ms 00:25:43.099 [2024-12-09 10:17:13.669289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.099 [2024-12-09 10:17:13.669376] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
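(The 'FTL startup' trace above — region layout dump, superblock v5 metadata layout, layout upgrade — is emitted while an FTL bdev is being created. A minimal sketch of the RPC that kicks off this sequence, using the device names reported in the bdev_get_bdevs dump further below; the exact flags are an assumption from SPDK's rpc.py conventions, not taken from this log:
  # Create an FTL bdev named ftl0 on a base (data) bdev plus an NV cache bdev.
  # -b: FTL bdev name, -d: base bdev, -c: cache bdev (names assumed from the dump below).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_create \
      -b ftl0 -d 58b9ffe1-08b3-4b3d-bac2-bf5a5645e2ca -c nvc0n1p0
)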
00:25:43.099 [2024-12-09 10:17:13.669400] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:47.284 [2024-12-09 10:17:17.412135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.284 [2024-12-09 10:17:17.412279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:47.284 [2024-12-09 10:17:17.412304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3742.773 ms 00:25:47.284 [2024-12-09 10:17:17.412320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.284 [2024-12-09 10:17:17.457512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.284 [2024-12-09 10:17:17.457959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:47.284 [2024-12-09 10:17:17.457995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.908 ms 00:25:47.284 [2024-12-09 10:17:17.458013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.284 [2024-12-09 10:17:17.458251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.284 [2024-12-09 10:17:17.458278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:47.284 [2024-12-09 10:17:17.458294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:25:47.284 [2024-12-09 10:17:17.458324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.284 [2024-12-09 10:17:17.516179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.284 [2024-12-09 10:17:17.516286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:47.284 [2024-12-09 10:17:17.516310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.770 ms 00:25:47.284 [2024-12-09 10:17:17.516327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.284 [2024-12-09 10:17:17.516395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.284 [2024-12-09 10:17:17.516416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:47.284 [2024-12-09 10:17:17.516429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:47.284 [2024-12-09 10:17:17.516443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.284 [2024-12-09 10:17:17.517218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.284 [2024-12-09 10:17:17.517251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:47.284 [2024-12-09 10:17:17.517282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.648 ms 00:25:47.284 [2024-12-09 10:17:17.517300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.284 [2024-12-09 10:17:17.517526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.284 [2024-12-09 10:17:17.517557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:47.284 [2024-12-09 10:17:17.517571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:25:47.284 [2024-12-09 10:17:17.517589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.284 [2024-12-09 10:17:17.541720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.284 [2024-12-09 10:17:17.541810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:47.284 [2024-12-09 
10:17:17.541847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.097 ms 00:25:47.284 [2024-12-09 10:17:17.541866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.284 [2024-12-09 10:17:17.557246] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:47.284 [2024-12-09 10:17:17.580390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.284 [2024-12-09 10:17:17.580474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:47.284 [2024-12-09 10:17:17.580507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.366 ms 00:25:47.284 [2024-12-09 10:17:17.580521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.284 [2024-12-09 10:17:17.650019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.284 [2024-12-09 10:17:17.650175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:47.284 [2024-12-09 10:17:17.650209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.377 ms 00:25:47.284 [2024-12-09 10:17:17.650224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.284 [2024-12-09 10:17:17.650547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.284 [2024-12-09 10:17:17.650568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:47.284 [2024-12-09 10:17:17.650587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:25:47.284 [2024-12-09 10:17:17.650599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.284 [2024-12-09 10:17:17.682522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.284 [2024-12-09 10:17:17.682583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:47.284 [2024-12-09 10:17:17.682620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.827 ms 00:25:47.284 [2024-12-09 10:17:17.682632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.284 [2024-12-09 10:17:17.713792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.285 [2024-12-09 10:17:17.713860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:47.285 [2024-12-09 10:17:17.713885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.065 ms 00:25:47.285 [2024-12-09 10:17:17.713898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.285 [2024-12-09 10:17:17.714809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.285 [2024-12-09 10:17:17.714851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:47.285 [2024-12-09 10:17:17.714871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.856 ms 00:25:47.285 [2024-12-09 10:17:17.714884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.285 [2024-12-09 10:17:17.804119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.285 [2024-12-09 10:17:17.804206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:47.285 [2024-12-09 10:17:17.804268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.137 ms 00:25:47.285 [2024-12-09 10:17:17.804286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.285 [2024-12-09 
10:17:17.838094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.285 [2024-12-09 10:17:17.838148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:47.285 [2024-12-09 10:17:17.838172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.670 ms 00:25:47.285 [2024-12-09 10:17:17.838186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.285 [2024-12-09 10:17:17.870373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.285 [2024-12-09 10:17:17.870435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:47.285 [2024-12-09 10:17:17.870459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.117 ms 00:25:47.285 [2024-12-09 10:17:17.870471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.285 [2024-12-09 10:17:17.903336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.285 [2024-12-09 10:17:17.903628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:47.285 [2024-12-09 10:17:17.903667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.796 ms 00:25:47.285 [2024-12-09 10:17:17.903682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.285 [2024-12-09 10:17:17.903753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.285 [2024-12-09 10:17:17.903774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:47.285 [2024-12-09 10:17:17.903799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:47.285 [2024-12-09 10:17:17.903811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.285 [2024-12-09 10:17:17.904007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.285 [2024-12-09 10:17:17.904032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:47.285 [2024-12-09 10:17:17.904060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:25:47.285 [2024-12-09 10:17:17.904073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.285 [2024-12-09 10:17:17.905526] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4258.671 ms, result 0 00:25:47.285 { 00:25:47.285 "name": "ftl0", 00:25:47.285 "uuid": "1b5e3b67-6e0f-4049-8e91-d4cfcc014636" 00:25:47.285 } 00:25:47.285 10:17:17 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:25:47.285 10:17:17 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:25:47.285 10:17:17 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:47.285 10:17:17 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:25:47.285 10:17:17 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:47.285 10:17:17 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:47.285 10:17:17 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:47.543 10:17:18 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:25:47.802 [ 00:25:47.802 { 00:25:47.802 "name": "ftl0", 00:25:47.802 "aliases": [ 00:25:47.802 "1b5e3b67-6e0f-4049-8e91-d4cfcc014636" 00:25:47.802 ], 00:25:47.802 "product_name": "FTL 
disk", 00:25:47.802 "block_size": 4096, 00:25:47.802 "num_blocks": 20971520, 00:25:47.802 "uuid": "1b5e3b67-6e0f-4049-8e91-d4cfcc014636", 00:25:47.802 "assigned_rate_limits": { 00:25:47.802 "rw_ios_per_sec": 0, 00:25:47.802 "rw_mbytes_per_sec": 0, 00:25:47.802 "r_mbytes_per_sec": 0, 00:25:47.802 "w_mbytes_per_sec": 0 00:25:47.802 }, 00:25:47.803 "claimed": false, 00:25:47.803 "zoned": false, 00:25:47.803 "supported_io_types": { 00:25:47.803 "read": true, 00:25:47.803 "write": true, 00:25:47.803 "unmap": true, 00:25:47.803 "flush": true, 00:25:47.803 "reset": false, 00:25:47.803 "nvme_admin": false, 00:25:47.803 "nvme_io": false, 00:25:47.803 "nvme_io_md": false, 00:25:47.803 "write_zeroes": true, 00:25:47.803 "zcopy": false, 00:25:47.803 "get_zone_info": false, 00:25:47.803 "zone_management": false, 00:25:47.803 "zone_append": false, 00:25:47.803 "compare": false, 00:25:47.803 "compare_and_write": false, 00:25:47.803 "abort": false, 00:25:47.803 "seek_hole": false, 00:25:47.803 "seek_data": false, 00:25:47.803 "copy": false, 00:25:47.803 "nvme_iov_md": false 00:25:47.803 }, 00:25:47.803 "driver_specific": { 00:25:47.803 "ftl": { 00:25:47.803 "base_bdev": "58b9ffe1-08b3-4b3d-bac2-bf5a5645e2ca", 00:25:47.803 "cache": "nvc0n1p0" 00:25:47.803 } 00:25:47.803 } 00:25:47.803 } 00:25:47.803 ] 00:25:47.803 10:17:18 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:25:47.803 10:17:18 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:25:47.803 10:17:18 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:48.370 10:17:18 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:25:48.370 10:17:18 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:48.370 [2024-12-09 10:17:19.146898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.370 [2024-12-09 10:17:19.146973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:48.370 [2024-12-09 10:17:19.146998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:48.370 [2024-12-09 10:17:19.147015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.370 [2024-12-09 10:17:19.147067] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:48.370 [2024-12-09 10:17:19.150879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.370 [2024-12-09 10:17:19.150925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:48.370 [2024-12-09 10:17:19.150946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.780 ms 00:25:48.370 [2024-12-09 10:17:19.150959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.370 [2024-12-09 10:17:19.151507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.370 [2024-12-09 10:17:19.151541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:48.370 [2024-12-09 10:17:19.151560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.497 ms 00:25:48.370 [2024-12-09 10:17:19.151572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.370 [2024-12-09 10:17:19.154860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.370 [2024-12-09 10:17:19.154898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:48.371 
[2024-12-09 10:17:19.154923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.238 ms 00:25:48.371 [2024-12-09 10:17:19.154935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.371 [2024-12-09 10:17:19.162008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.371 [2024-12-09 10:17:19.162076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:48.371 [2024-12-09 10:17:19.162095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.036 ms 00:25:48.371 [2024-12-09 10:17:19.162109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.631 [2024-12-09 10:17:19.195340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.631 [2024-12-09 10:17:19.195400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:48.631 [2024-12-09 10:17:19.195455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.116 ms 00:25:48.631 [2024-12-09 10:17:19.195476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.631 [2024-12-09 10:17:19.215217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.631 [2024-12-09 10:17:19.215312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:48.631 [2024-12-09 10:17:19.215394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.567 ms 00:25:48.631 [2024-12-09 10:17:19.215411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.631 [2024-12-09 10:17:19.215722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.631 [2024-12-09 10:17:19.215756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:48.631 [2024-12-09 10:17:19.215775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:25:48.631 [2024-12-09 10:17:19.215787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.631 [2024-12-09 10:17:19.247703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.631 [2024-12-09 10:17:19.247746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:48.631 [2024-12-09 10:17:19.247772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.878 ms 00:25:48.631 [2024-12-09 10:17:19.247784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.631 [2024-12-09 10:17:19.279368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.631 [2024-12-09 10:17:19.279427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:48.631 [2024-12-09 10:17:19.279463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.513 ms 00:25:48.631 [2024-12-09 10:17:19.279485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.631 [2024-12-09 10:17:19.310621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.631 [2024-12-09 10:17:19.310709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:48.631 [2024-12-09 10:17:19.310731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.070 ms 00:25:48.631 [2024-12-09 10:17:19.310743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.631 [2024-12-09 10:17:19.341731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.631 [2024-12-09 10:17:19.341774] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:48.631 [2024-12-09 10:17:19.341795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.820 ms 00:25:48.631 [2024-12-09 10:17:19.341808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.631 [2024-12-09 10:17:19.341884] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:48.631 [2024-12-09 10:17:19.341911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:48.631 [2024-12-09 10:17:19.341931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:48.631 [2024-12-09 10:17:19.341954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:48.631 [2024-12-09 10:17:19.341970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:48.631 [2024-12-09 10:17:19.341984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:48.631 [2024-12-09 10:17:19.342000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:48.631 [2024-12-09 10:17:19.342015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:48.631 [2024-12-09 10:17:19.342035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:48.631 [2024-12-09 10:17:19.342059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:48.631 [2024-12-09 10:17:19.342084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:48.631 [2024-12-09 10:17:19.342098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:48.631 [2024-12-09 10:17:19.342113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:48.631 [2024-12-09 10:17:19.342127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:48.631 [2024-12-09 10:17:19.342143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:48.631 [2024-12-09 10:17:19.342156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:48.631 [2024-12-09 10:17:19.342172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:48.631 [2024-12-09 10:17:19.342186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:48.631 [2024-12-09 10:17:19.342202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:48.631 [2024-12-09 10:17:19.342216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:48.631 [2024-12-09 10:17:19.342232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:48.631 [2024-12-09 10:17:19.342245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:48.631 [2024-12-09 10:17:19.342264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:48.631 
[2024-12-09 10:17:19.342278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:48.631 [2024-12-09 10:17:19.342296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:48.631 [2024-12-09 10:17:19.342309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:48.631 [2024-12-09 10:17:19.342325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:48.631 [2024-12-09 10:17:19.342339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:48.631 [2024-12-09 10:17:19.342355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:48.631 [2024-12-09 10:17:19.342368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:48.631 [2024-12-09 10:17:19.342384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:48.631 [2024-12-09 10:17:19.342397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:48.631 [2024-12-09 10:17:19.342413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:48.631 [2024-12-09 10:17:19.342426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:48.631 [2024-12-09 10:17:19.342451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:48.631 [2024-12-09 10:17:19.342466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:48.631 [2024-12-09 10:17:19.342490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:48.631 [2024-12-09 10:17:19.342504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:48.631 [2024-12-09 10:17:19.342521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:48.631 [2024-12-09 10:17:19.342535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:48.631 [2024-12-09 10:17:19.342554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.342568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.342584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.342603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.342619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.342632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.342649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.342663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:25:48.632 [2024-12-09 10:17:19.342681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.342695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.342712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.342726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.342743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.342756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.342773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.342786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.342805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.342819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.342848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.342863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.342879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.342893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.342909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.342923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.342939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.342953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.342976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.342991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.343008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.343022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.343038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.343052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.343072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.343085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.343107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.343121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.343137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.343151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.343167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.343181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.343197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.343211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.343251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.343265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.343282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.343296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.343312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.343326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.343346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.343359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.343375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.343389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.343405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.343419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.343435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.343449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.343466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.343480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.343502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.343517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.343535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:48.632 [2024-12-09 10:17:19.343558] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:48.632 [2024-12-09 10:17:19.343573] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1b5e3b67-6e0f-4049-8e91-d4cfcc014636 00:25:48.632 [2024-12-09 10:17:19.343585] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:48.632 [2024-12-09 10:17:19.343602] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:48.632 [2024-12-09 10:17:19.343613] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:48.632 [2024-12-09 10:17:19.343632] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:48.632 [2024-12-09 10:17:19.343643] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:48.632 [2024-12-09 10:17:19.343658] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:48.632 [2024-12-09 10:17:19.343670] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:48.632 [2024-12-09 10:17:19.343683] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:48.632 [2024-12-09 10:17:19.343694] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:48.632 [2024-12-09 10:17:19.343709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.632 [2024-12-09 10:17:19.343721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:48.632 [2024-12-09 10:17:19.343737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.830 ms 00:25:48.632 [2024-12-09 10:17:19.343749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.632 [2024-12-09 10:17:19.361476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.632 [2024-12-09 10:17:19.361547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:48.632 [2024-12-09 10:17:19.361583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.596 ms 00:25:48.632 [2024-12-09 10:17:19.361595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.632 [2024-12-09 10:17:19.362130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.632 [2024-12-09 10:17:19.362164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:48.632 [2024-12-09 10:17:19.362183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.489 ms 00:25:48.632 [2024-12-09 10:17:19.362195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.632 [2024-12-09 10:17:19.424399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:48.632 [2024-12-09 10:17:19.424480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:48.632 [2024-12-09 10:17:19.424529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:48.632 [2024-12-09 10:17:19.424542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
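(The Rollback entries here and below are the 'FTL shutdown' management process unwinding each startup step in reverse order, each reported with duration 0.000 ms in this run. A minimal sketch of the wait-then-unload flow that drives it, restricted to the RPCs already visible in this log (rpc.py path as in the fio.sh trace above):
  # Block until bdev examination settles, confirm ftl0 exists (2000 ms timeout),
  # then unload it -- the unload RPC produces this shutdown/rollback trace.
  rpc.py bdev_wait_for_examine
  rpc.py bdev_get_bdevs -b ftl0 -t 2000
  rpc.py bdev_ftl_unload -b ftl0
)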
00:25:48.632 [2024-12-09 10:17:19.424648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:48.632 [2024-12-09 10:17:19.424665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:48.632 [2024-12-09 10:17:19.424681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:48.632 [2024-12-09 10:17:19.424693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.632 [2024-12-09 10:17:19.424852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:48.632 [2024-12-09 10:17:19.424881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:48.632 [2024-12-09 10:17:19.424899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:48.632 [2024-12-09 10:17:19.424911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.632 [2024-12-09 10:17:19.424952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:48.632 [2024-12-09 10:17:19.424970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:48.633 [2024-12-09 10:17:19.424984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:48.633 [2024-12-09 10:17:19.424996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.892 [2024-12-09 10:17:19.545779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:48.892 [2024-12-09 10:17:19.545861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:48.892 [2024-12-09 10:17:19.545888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:48.892 [2024-12-09 10:17:19.545902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.892 [2024-12-09 10:17:19.637523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:48.892 [2024-12-09 10:17:19.637670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:48.892 [2024-12-09 10:17:19.637693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:48.892 [2024-12-09 10:17:19.637707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.892 [2024-12-09 10:17:19.637911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:48.892 [2024-12-09 10:17:19.637933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:48.892 [2024-12-09 10:17:19.637965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:48.892 [2024-12-09 10:17:19.637977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.892 [2024-12-09 10:17:19.638099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:48.892 [2024-12-09 10:17:19.638120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:48.892 [2024-12-09 10:17:19.638136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:48.892 [2024-12-09 10:17:19.638148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.892 [2024-12-09 10:17:19.638314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:48.892 [2024-12-09 10:17:19.638345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:48.892 [2024-12-09 10:17:19.638362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:48.892 [2024-12-09 
10:17:19.638378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.892 [2024-12-09 10:17:19.638458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:48.892 [2024-12-09 10:17:19.638477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:48.892 [2024-12-09 10:17:19.638493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:48.892 [2024-12-09 10:17:19.638505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.892 [2024-12-09 10:17:19.638571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:48.892 [2024-12-09 10:17:19.638590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:48.892 [2024-12-09 10:17:19.638606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:48.892 [2024-12-09 10:17:19.638622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.892 [2024-12-09 10:17:19.638698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:48.892 [2024-12-09 10:17:19.638717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:48.892 [2024-12-09 10:17:19.638732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:48.892 [2024-12-09 10:17:19.638743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.892 [2024-12-09 10:17:19.638993] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 492.036 ms, result 0 00:25:48.892 true 00:25:48.892 10:17:19 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 77396 00:25:48.892 10:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 77396 ']' 00:25:48.892 10:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 77396 00:25:48.892 10:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:25:48.892 10:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:48.892 10:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77396 00:25:49.151 10:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:49.151 10:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:49.151 killing process with pid 77396 00:25:49.151 10:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77396' 00:25:49.151 10:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 77396 00:25:49.151 10:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 77396 00:25:54.427 10:17:24 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:25:54.427 10:17:24 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:25:54.427 10:17:24 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:25:54.427 10:17:24 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:54.427 10:17:24 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:25:54.427 10:17:24 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:25:54.427 10:17:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:25:54.427 10:17:24 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:54.427 10:17:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:54.428 10:17:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:54.428 10:17:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:54.428 10:17:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:25:54.428 10:17:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:54.428 10:17:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:54.428 10:17:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:54.428 10:17:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:25:54.428 10:17:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:54.428 10:17:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:25:54.428 10:17:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:25:54.428 10:17:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:25:54.428 10:17:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:54.428 10:17:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:25:54.428 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:25:54.428 fio-3.35 00:25:54.428 Starting 1 thread 00:25:59.707 00:25:59.707 test: (groupid=0, jobs=1): err= 0: pid=77625: Mon Dec 9 10:17:30 2024 00:25:59.707 read: IOPS=930, BW=61.8MiB/s (64.8MB/s)(255MiB/4120msec) 00:25:59.707 slat (usec): min=5, max=146, avg= 7.70, stdev= 4.00 00:25:59.707 clat (usec): min=336, max=830, avg=475.05, stdev=53.76 00:25:59.707 lat (usec): min=343, max=836, avg=482.75, stdev=54.61 00:25:59.707 clat percentiles (usec): 00:25:59.707 | 1.00th=[ 375], 5.00th=[ 392], 10.00th=[ 404], 20.00th=[ 449], 00:25:59.707 | 30.00th=[ 457], 40.00th=[ 461], 50.00th=[ 465], 60.00th=[ 474], 00:25:59.707 | 70.00th=[ 482], 80.00th=[ 523], 90.00th=[ 545], 95.00th=[ 570], 00:25:59.707 | 99.00th=[ 627], 99.50th=[ 668], 99.90th=[ 775], 99.95th=[ 775], 00:25:59.707 | 99.99th=[ 832] 00:25:59.707 write: IOPS=936, BW=62.2MiB/s (65.2MB/s)(256MiB/4116msec); 0 zone resets 00:25:59.707 slat (usec): min=19, max=209, avg=24.71, stdev= 6.91 00:25:59.707 clat (usec): min=400, max=1004, avg=549.98, stdev=66.80 00:25:59.707 lat (usec): min=422, max=1033, avg=574.69, stdev=67.25 00:25:59.707 clat percentiles (usec): 00:25:59.707 | 1.00th=[ 429], 5.00th=[ 474], 10.00th=[ 486], 20.00th=[ 494], 00:25:59.707 | 30.00th=[ 506], 40.00th=[ 529], 50.00th=[ 553], 60.00th=[ 562], 00:25:59.707 | 70.00th=[ 570], 80.00th=[ 586], 90.00th=[ 627], 95.00th=[ 660], 00:25:59.707 | 99.00th=[ 799], 99.50th=[ 848], 99.90th=[ 930], 99.95th=[ 947], 00:25:59.707 | 99.99th=[ 1004] 00:25:59.707 bw ( KiB/s): min=61472, max=66232, per=100.00%, avg=63750.00, stdev=1763.89, samples=8 00:25:59.707 iops : min= 904, max= 974, avg=937.50, stdev=25.94, samples=8 00:25:59.707 lat (usec) : 500=51.76%, 750=47.24%, 1000=0.99% 00:25:59.707 lat (msec) : 2=0.01% 
00:25:59.707 cpu : usr=98.45%, sys=0.49%, ctx=6, majf=0, minf=1169 00:25:59.707 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:59.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.707 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.707 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.707 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:59.707 00:25:59.707 Run status group 0 (all jobs): 00:25:59.707 READ: bw=61.8MiB/s (64.8MB/s), 61.8MiB/s-61.8MiB/s (64.8MB/s-64.8MB/s), io=255MiB (267MB), run=4120-4120msec 00:25:59.707 WRITE: bw=62.2MiB/s (65.2MB/s), 62.2MiB/s-62.2MiB/s (65.2MB/s-65.2MB/s), io=256MiB (269MB), run=4116-4116msec 00:26:01.610 ----------------------------------------------------- 00:26:01.610 Suppressions used: 00:26:01.610 count bytes template 00:26:01.610 1 5 /usr/src/fio/parse.c 00:26:01.610 1 8 libtcmalloc_minimal.so 00:26:01.610 1 904 libcrypto.so 00:26:01.610 ----------------------------------------------------- 00:26:01.610 00:26:01.610 10:17:32 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:26:01.610 10:17:32 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:01.610 10:17:32 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:26:01.610 10:17:32 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:26:01.610 10:17:32 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:26:01.610 10:17:32 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:01.610 10:17:32 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:26:01.610 10:17:32 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:26:01.610 10:17:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:26:01.610 10:17:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:01.610 10:17:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:01.610 10:17:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:26:01.610 10:17:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:01.610 10:17:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:26:01.610 10:17:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:01.610 10:17:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:01.610 10:17:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:01.610 10:17:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:26:01.610 10:17:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:01.610 10:17:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:26:01.610 10:17:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:26:01.610 10:17:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:26:01.610 10:17:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:01.610 10:17:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:26:01.868 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:26:01.868 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:26:01.868 fio-3.35 00:26:01.868 Starting 2 threads 00:26:33.931 00:26:33.931 first_half: (groupid=0, jobs=1): err= 0: pid=77734: Mon Dec 9 10:18:04 2024 00:26:33.931 read: IOPS=2174, BW=8697KiB/s (8905kB/s)(255MiB/30010msec) 00:26:33.931 slat (usec): min=5, max=137, avg= 7.94, stdev= 2.02 00:26:33.931 clat (usec): min=956, max=346639, avg=43553.12, stdev=22950.04 00:26:33.931 lat (usec): min=964, max=346646, avg=43561.06, stdev=22950.23 00:26:33.931 clat percentiles (msec): 00:26:33.931 | 1.00th=[ 11], 5.00th=[ 37], 10.00th=[ 39], 20.00th=[ 40], 00:26:33.931 | 30.00th=[ 40], 40.00th=[ 40], 50.00th=[ 40], 60.00th=[ 41], 00:26:33.931 | 70.00th=[ 41], 80.00th=[ 42], 90.00th=[ 47], 95.00th=[ 54], 00:26:33.931 | 99.00th=[ 174], 99.50th=[ 194], 99.90th=[ 264], 99.95th=[ 296], 00:26:33.931 | 99.99th=[ 334] 00:26:33.931 write: IOPS=2563, BW=10.0MiB/s (10.5MB/s)(256MiB/25570msec); 0 zone resets 00:26:33.931 slat (usec): min=5, max=853, avg=10.32, stdev= 7.55 00:26:33.931 clat (usec): min=495, max=118573, avg=15118.24, stdev=24977.64 00:26:33.931 lat (usec): min=517, max=118581, avg=15128.57, stdev=24977.84 00:26:33.931 clat percentiles (usec): 00:26:33.931 | 1.00th=[ 1029], 5.00th=[ 1352], 10.00th=[ 1582], 20.00th=[ 2147], 00:26:33.931 | 30.00th=[ 4228], 40.00th=[ 6521], 50.00th=[ 7635], 60.00th=[ 8455], 00:26:33.931 | 70.00th=[ 9896], 80.00th=[ 13960], 90.00th=[ 38011], 95.00th=[ 92799], 00:26:33.931 | 99.00th=[106431], 99.50th=[108528], 99.90th=[112722], 99.95th=[115868], 00:26:33.931 | 99.99th=[117965] 00:26:33.931 bw ( KiB/s): min= 968, max=42672, per=98.34%, avg=20164.92, stdev=11519.90, samples=26 00:26:33.931 iops : min= 242, max=10668, avg=5041.23, stdev=2879.97, samples=26 00:26:33.931 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.39% 00:26:33.931 lat (msec) : 2=8.90%, 4=5.50%, 10=20.85%, 20=10.65%, 50=46.20% 00:26:33.931 lat (msec) : 100=4.72%, 250=2.70%, 500=0.07% 00:26:33.931 cpu : usr=99.12%, sys=0.16%, ctx=103, majf=0, minf=5616 00:26:33.931 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.8% 00:26:33.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.931 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:33.931 issued rwts: total=65246,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.931 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:33.931 second_half: (groupid=0, jobs=1): err= 0: pid=77735: Mon Dec 9 10:18:04 2024 00:26:33.931 read: IOPS=2183, BW=8734KiB/s (8943kB/s)(255MiB/29853msec) 00:26:33.931 slat (usec): min=5, max=187, avg= 7.95, stdev= 2.09 00:26:33.931 clat (usec): min=1033, max=353393, avg=44651.89, stdev=22594.68 00:26:33.931 lat (usec): min=1044, max=353401, avg=44659.84, stdev=22594.84 00:26:33.931 clat percentiles (msec): 00:26:33.931 | 1.00th=[ 8], 5.00th=[ 39], 10.00th=[ 40], 20.00th=[ 40], 00:26:33.931 | 30.00th=[ 40], 40.00th=[ 40], 50.00th=[ 40], 60.00th=[ 41], 00:26:33.931 | 70.00th=[ 41], 80.00th=[ 44], 90.00th=[ 48], 95.00th=[ 68], 00:26:33.931 | 
99.00th=[ 163], 99.50th=[ 192], 99.90th=[ 232], 99.95th=[ 247], 00:26:33.931 | 99.99th=[ 342] 00:26:33.931 write: IOPS=2825, BW=11.0MiB/s (11.6MB/s)(256MiB/23197msec); 0 zone resets 00:26:33.931 slat (usec): min=6, max=207, avg=10.30, stdev= 5.34 00:26:33.931 clat (usec): min=527, max=118869, avg=13867.02, stdev=24898.37 00:26:33.931 lat (usec): min=539, max=118880, avg=13877.31, stdev=24898.52 00:26:33.931 clat percentiles (usec): 00:26:33.931 | 1.00th=[ 1057], 5.00th=[ 1369], 10.00th=[ 1549], 20.00th=[ 1827], 00:26:33.931 | 30.00th=[ 2245], 40.00th=[ 4113], 50.00th=[ 5604], 60.00th=[ 6652], 00:26:33.931 | 70.00th=[ 8717], 80.00th=[ 14091], 90.00th=[ 25560], 95.00th=[ 91751], 00:26:33.931 | 99.00th=[106431], 99.50th=[108528], 99.90th=[112722], 99.95th=[113771], 00:26:33.931 | 99.99th=[116917] 00:26:33.931 bw ( KiB/s): min= 1040, max=40592, per=100.00%, avg=22795.13, stdev=8308.01, samples=23 00:26:33.931 iops : min= 260, max=10148, avg=5698.78, stdev=2077.00, samples=23 00:26:33.931 lat (usec) : 750=0.07%, 1000=0.27% 00:26:33.931 lat (msec) : 2=12.49%, 4=7.08%, 10=16.99%, 20=8.87%, 50=46.30% 00:26:33.931 lat (msec) : 100=4.86%, 250=3.06%, 500=0.02% 00:26:33.931 cpu : usr=98.87%, sys=0.27%, ctx=50, majf=0, minf=5518 00:26:33.931 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:26:33.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.931 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:33.931 issued rwts: total=65182,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.931 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:33.931 00:26:33.931 Run status group 0 (all jobs): 00:26:33.931 READ: bw=17.0MiB/s (17.8MB/s), 8697KiB/s-8734KiB/s (8905kB/s-8943kB/s), io=509MiB (534MB), run=29853-30010msec 00:26:33.931 WRITE: bw=20.0MiB/s (21.0MB/s), 10.0MiB/s-11.0MiB/s (10.5MB/s-11.6MB/s), io=512MiB (537MB), run=23197-25570msec 00:26:35.911 ----------------------------------------------------- 00:26:35.911 Suppressions used: 00:26:35.911 count bytes template 00:26:35.911 2 10 /usr/src/fio/parse.c 00:26:35.911 3 288 /usr/src/fio/iolog.c 00:26:35.911 1 8 libtcmalloc_minimal.so 00:26:35.911 1 904 libcrypto.so 00:26:35.911 ----------------------------------------------------- 00:26:35.911 00:26:35.911 10:18:06 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:26:35.911 10:18:06 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:35.911 10:18:06 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:26:35.911 10:18:06 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:26:35.911 10:18:06 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:26:35.911 10:18:06 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:35.911 10:18:06 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:26:35.911 10:18:06 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:26:35.911 10:18:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:26:35.911 10:18:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:35.911 10:18:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:35.911 10:18:06 
ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:26:35.911 10:18:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:35.911 10:18:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:26:35.911 10:18:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:35.911 10:18:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:35.911 10:18:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:36.170 10:18:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:26:36.170 10:18:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:36.170 10:18:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:26:36.170 10:18:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:26:36.170 10:18:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:26:36.170 10:18:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:36.170 10:18:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:26:36.170 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:26:36.170 fio-3.35 00:26:36.170 Starting 1 thread 00:26:54.256 00:26:54.256 test: (groupid=0, jobs=1): err= 0: pid=78103: Mon Dec 9 10:18:24 2024 00:26:54.256 read: IOPS=6372, BW=24.9MiB/s (26.1MB/s)(255MiB/10231msec) 00:26:54.256 slat (nsec): min=4997, max=42074, avg=7176.34, stdev=1979.89 00:26:54.256 clat (usec): min=938, max=39217, avg=20072.63, stdev=1012.95 00:26:54.256 lat (usec): min=944, max=39225, avg=20079.81, stdev=1012.95 00:26:54.256 clat percentiles (usec): 00:26:54.256 | 1.00th=[19006], 5.00th=[19268], 10.00th=[19530], 20.00th=[19530], 00:26:54.256 | 30.00th=[19792], 40.00th=[19792], 50.00th=[20055], 60.00th=[20055], 00:26:54.256 | 70.00th=[20317], 80.00th=[20317], 90.00th=[20579], 95.00th=[20841], 00:26:54.256 | 99.00th=[23462], 99.50th=[23725], 99.90th=[29492], 99.95th=[34341], 00:26:54.256 | 99.99th=[38536] 00:26:54.256 write: IOPS=11.2k, BW=43.8MiB/s (45.9MB/s)(256MiB/5844msec); 0 zone resets 00:26:54.256 slat (usec): min=6, max=546, avg=10.41, stdev= 5.88 00:26:54.256 clat (usec): min=655, max=61920, avg=11352.11, stdev=13745.26 00:26:54.256 lat (usec): min=665, max=61930, avg=11362.52, stdev=13745.20 00:26:54.256 clat percentiles (usec): 00:26:54.256 | 1.00th=[ 955], 5.00th=[ 1172], 10.00th=[ 1303], 20.00th=[ 1483], 00:26:54.256 | 30.00th=[ 1696], 40.00th=[ 2245], 50.00th=[ 7635], 60.00th=[ 9110], 00:26:54.256 | 70.00th=[10683], 80.00th=[13042], 90.00th=[39584], 95.00th=[42206], 00:26:54.256 | 99.00th=[47973], 99.50th=[49546], 99.90th=[51643], 99.95th=[53216], 00:26:54.256 | 99.99th=[58983] 00:26:54.256 bw ( KiB/s): min=27288, max=63760, per=97.40%, avg=43690.67, stdev=9022.49, samples=12 00:26:54.256 iops : min= 6822, max=15940, avg=10922.67, stdev=2255.62, samples=12 00:26:54.256 lat (usec) : 750=0.01%, 1000=0.75% 00:26:54.256 lat (msec) : 2=18.31%, 4=1.86%, 10=12.55%, 20=35.07%, 50=31.30% 00:26:54.256 lat (msec) : 100=0.15% 00:26:54.256 cpu : usr=98.87%, sys=0.29%, ctx=31, majf=0, minf=5565 
00:26:54.256 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:26:54.256 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:54.256 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:54.256 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:54.256 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:54.256 00:26:54.256 Run status group 0 (all jobs): 00:26:54.256 READ: bw=24.9MiB/s (26.1MB/s), 24.9MiB/s-24.9MiB/s (26.1MB/s-26.1MB/s), io=255MiB (267MB), run=10231-10231msec 00:26:54.256 WRITE: bw=43.8MiB/s (45.9MB/s), 43.8MiB/s-43.8MiB/s (45.9MB/s-45.9MB/s), io=256MiB (268MB), run=5844-5844msec 00:26:56.159 ----------------------------------------------------- 00:26:56.159 Suppressions used: 00:26:56.159 count bytes template 00:26:56.159 1 5 /usr/src/fio/parse.c 00:26:56.159 2 192 /usr/src/fio/iolog.c 00:26:56.159 1 8 libtcmalloc_minimal.so 00:26:56.159 1 904 libcrypto.so 00:26:56.159 ----------------------------------------------------- 00:26:56.159 00:26:56.159 10:18:26 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:26:56.159 10:18:26 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:56.159 10:18:26 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:26:56.159 10:18:26 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:56.159 Remove shared memory files 00:26:56.159 10:18:26 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:26:56.159 10:18:26 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:56.159 10:18:26 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:26:56.159 10:18:26 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:26:56.159 10:18:26 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid58098 /dev/shm/spdk_tgt_trace.pid76303 00:26:56.159 10:18:26 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:56.159 10:18:26 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:26:56.159 ************************************ 00:26:56.159 END TEST ftl_fio_basic 00:26:56.159 ************************************ 00:26:56.159 00:26:56.159 real 1m18.574s 00:26:56.159 user 2m54.681s 00:26:56.159 sys 0m4.774s 00:26:56.159 10:18:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:56.159 10:18:26 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:26:56.159 10:18:26 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:26:56.159 10:18:26 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:56.159 10:18:26 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:56.159 10:18:26 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:56.159 ************************************ 00:26:56.159 START TEST ftl_bdevperf 00:26:56.159 ************************************ 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:26:56.159 * Looking for test storage... 
00:26:56.159 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:56.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.159 --rc genhtml_branch_coverage=1 00:26:56.159 --rc genhtml_function_coverage=1 00:26:56.159 --rc genhtml_legend=1 00:26:56.159 --rc geninfo_all_blocks=1 00:26:56.159 --rc geninfo_unexecuted_blocks=1 00:26:56.159 00:26:56.159 ' 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:56.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.159 --rc genhtml_branch_coverage=1 00:26:56.159 
--rc genhtml_function_coverage=1 00:26:56.159 --rc genhtml_legend=1 00:26:56.159 --rc geninfo_all_blocks=1 00:26:56.159 --rc geninfo_unexecuted_blocks=1 00:26:56.159 00:26:56.159 ' 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:56.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.159 --rc genhtml_branch_coverage=1 00:26:56.159 --rc genhtml_function_coverage=1 00:26:56.159 --rc genhtml_legend=1 00:26:56.159 --rc geninfo_all_blocks=1 00:26:56.159 --rc geninfo_unexecuted_blocks=1 00:26:56.159 00:26:56.159 ' 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:56.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.159 --rc genhtml_branch_coverage=1 00:26:56.159 --rc genhtml_function_coverage=1 00:26:56.159 --rc genhtml_legend=1 00:26:56.159 --rc geninfo_all_blocks=1 00:26:56.159 --rc geninfo_unexecuted_blocks=1 00:26:56.159 00:26:56.159 ' 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:56.159 10:18:26 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:56.160 10:18:26 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:56.160 10:18:26 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:56.160 10:18:26 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:56.160 10:18:26 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:56.160 10:18:26 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:56.160 10:18:26 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:56.160 10:18:26 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:56.160 10:18:26 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:56.160 10:18:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:26:56.160 10:18:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:26:56.160 10:18:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:26:56.160 10:18:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:56.160 10:18:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:26:56.160 10:18:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=78370 00:26:56.160 10:18:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:26:56.160 10:18:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:26:56.160 10:18:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 78370 00:26:56.160 10:18:26 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 78370 ']' 00:26:56.160 10:18:26 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:56.160 10:18:26 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:56.160 10:18:26 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:56.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:56.160 10:18:26 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:56.160 10:18:26 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:56.419 [2024-12-09 10:18:27.051242] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
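The run that follows is RPC-driven: bdevperf is started idle with -z and pointed at a single target bdev with -T ftl0, the block stack is then assembled over the RPC socket, and only afterwards are the actual I/O jobs triggered from bdevperf.py. Condensed from the trace below, the essential sequence looks roughly like this (a sketch built from the commands visible in this run; the $lvs/$lvol captures stand in for the UUIDs the harness records):

SPDK=/home/vagrant/spdk_repo/spdk
$SPDK/build/examples/bdevperf -z -T ftl0 &        # -z: stay idle until told to run over RPC
# Base device: an lvstore plus a thin-provisioned 103424 MiB lvol on the 0000:00:11.0 namespace.
$SPDK/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
lvs=$($SPDK/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs)
lvol=$($SPDK/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")
# NV cache: a 5171 MiB split of the 0000:00:10.0 namespace.
$SPDK/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
$SPDK/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
# FTL bdev over both, with the L2P table capped at 20 MiB of DRAM.
$SPDK/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d "$lvol" -c nvc0n1p0 --l2p_dram_limit 20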
00:26:56.419 [2024-12-09 10:18:27.051710] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78370 ] 00:26:56.677 [2024-12-09 10:18:27.231612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.677 [2024-12-09 10:18:27.370811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:57.244 10:18:28 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:57.244 10:18:28 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:26:57.244 10:18:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:26:57.244 10:18:28 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:26:57.244 10:18:28 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:57.244 10:18:28 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:26:57.244 10:18:28 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:26:57.244 10:18:28 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:26:57.810 10:18:28 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:26:57.810 10:18:28 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:26:57.810 10:18:28 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:26:57.810 10:18:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:26:57.810 10:18:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:57.810 10:18:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:26:57.810 10:18:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:26:57.810 10:18:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:26:58.068 10:18:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:58.068 { 00:26:58.068 "name": "nvme0n1", 00:26:58.068 "aliases": [ 00:26:58.068 "fe7db139-3a58-47c2-a7c2-2ae89c071add" 00:26:58.068 ], 00:26:58.068 "product_name": "NVMe disk", 00:26:58.068 "block_size": 4096, 00:26:58.068 "num_blocks": 1310720, 00:26:58.068 "uuid": "fe7db139-3a58-47c2-a7c2-2ae89c071add", 00:26:58.068 "numa_id": -1, 00:26:58.068 "assigned_rate_limits": { 00:26:58.068 "rw_ios_per_sec": 0, 00:26:58.068 "rw_mbytes_per_sec": 0, 00:26:58.068 "r_mbytes_per_sec": 0, 00:26:58.068 "w_mbytes_per_sec": 0 00:26:58.068 }, 00:26:58.068 "claimed": true, 00:26:58.068 "claim_type": "read_many_write_one", 00:26:58.068 "zoned": false, 00:26:58.068 "supported_io_types": { 00:26:58.068 "read": true, 00:26:58.068 "write": true, 00:26:58.068 "unmap": true, 00:26:58.068 "flush": true, 00:26:58.068 "reset": true, 00:26:58.068 "nvme_admin": true, 00:26:58.068 "nvme_io": true, 00:26:58.068 "nvme_io_md": false, 00:26:58.068 "write_zeroes": true, 00:26:58.068 "zcopy": false, 00:26:58.068 "get_zone_info": false, 00:26:58.068 "zone_management": false, 00:26:58.068 "zone_append": false, 00:26:58.068 "compare": true, 00:26:58.068 "compare_and_write": false, 00:26:58.068 "abort": true, 00:26:58.068 "seek_hole": false, 00:26:58.068 "seek_data": false, 00:26:58.068 "copy": true, 00:26:58.068 "nvme_iov_md": false 00:26:58.068 }, 00:26:58.068 "driver_specific": { 00:26:58.068 
"nvme": [ 00:26:58.068 { 00:26:58.068 "pci_address": "0000:00:11.0", 00:26:58.068 "trid": { 00:26:58.068 "trtype": "PCIe", 00:26:58.068 "traddr": "0000:00:11.0" 00:26:58.068 }, 00:26:58.068 "ctrlr_data": { 00:26:58.068 "cntlid": 0, 00:26:58.068 "vendor_id": "0x1b36", 00:26:58.068 "model_number": "QEMU NVMe Ctrl", 00:26:58.068 "serial_number": "12341", 00:26:58.068 "firmware_revision": "8.0.0", 00:26:58.068 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:58.068 "oacs": { 00:26:58.068 "security": 0, 00:26:58.068 "format": 1, 00:26:58.068 "firmware": 0, 00:26:58.068 "ns_manage": 1 00:26:58.068 }, 00:26:58.068 "multi_ctrlr": false, 00:26:58.068 "ana_reporting": false 00:26:58.068 }, 00:26:58.068 "vs": { 00:26:58.068 "nvme_version": "1.4" 00:26:58.068 }, 00:26:58.068 "ns_data": { 00:26:58.068 "id": 1, 00:26:58.068 "can_share": false 00:26:58.068 } 00:26:58.068 } 00:26:58.068 ], 00:26:58.068 "mp_policy": "active_passive" 00:26:58.068 } 00:26:58.068 } 00:26:58.068 ]' 00:26:58.068 10:18:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:58.068 10:18:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:26:58.068 10:18:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:58.068 10:18:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:26:58.068 10:18:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:26:58.068 10:18:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:26:58.068 10:18:28 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:26:58.068 10:18:28 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:26:58.068 10:18:28 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:26:58.068 10:18:28 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:58.068 10:18:28 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:58.326 10:18:29 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=06f9f117-89be-4865-a890-9b761919d9eb 00:26:58.326 10:18:29 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:26:58.326 10:18:29 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 06f9f117-89be-4865-a890-9b761919d9eb 00:26:58.892 10:18:29 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:26:59.150 10:18:29 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=2f74f35a-b68a-440b-970b-705901425827 00:26:59.150 10:18:29 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 2f74f35a-b68a-440b-970b-705901425827 00:26:59.408 10:18:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=1075491d-d683-43a0-9715-1b01f2b52433 00:26:59.408 10:18:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 1075491d-d683-43a0-9715-1b01f2b52433 00:26:59.408 10:18:29 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:26:59.408 10:18:29 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:26:59.408 10:18:29 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=1075491d-d683-43a0-9715-1b01f2b52433 00:26:59.408 10:18:29 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:26:59.408 10:18:29 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 1075491d-d683-43a0-9715-1b01f2b52433 00:26:59.408 10:18:29 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=1075491d-d683-43a0-9715-1b01f2b52433 00:26:59.408 10:18:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:59.408 10:18:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:26:59.408 10:18:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:26:59.408 10:18:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1075491d-d683-43a0-9715-1b01f2b52433 00:26:59.666 10:18:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:59.666 { 00:26:59.666 "name": "1075491d-d683-43a0-9715-1b01f2b52433", 00:26:59.666 "aliases": [ 00:26:59.666 "lvs/nvme0n1p0" 00:26:59.666 ], 00:26:59.666 "product_name": "Logical Volume", 00:26:59.666 "block_size": 4096, 00:26:59.666 "num_blocks": 26476544, 00:26:59.666 "uuid": "1075491d-d683-43a0-9715-1b01f2b52433", 00:26:59.666 "assigned_rate_limits": { 00:26:59.666 "rw_ios_per_sec": 0, 00:26:59.666 "rw_mbytes_per_sec": 0, 00:26:59.666 "r_mbytes_per_sec": 0, 00:26:59.666 "w_mbytes_per_sec": 0 00:26:59.666 }, 00:26:59.666 "claimed": false, 00:26:59.666 "zoned": false, 00:26:59.666 "supported_io_types": { 00:26:59.666 "read": true, 00:26:59.666 "write": true, 00:26:59.666 "unmap": true, 00:26:59.666 "flush": false, 00:26:59.666 "reset": true, 00:26:59.666 "nvme_admin": false, 00:26:59.666 "nvme_io": false, 00:26:59.666 "nvme_io_md": false, 00:26:59.666 "write_zeroes": true, 00:26:59.666 "zcopy": false, 00:26:59.666 "get_zone_info": false, 00:26:59.666 "zone_management": false, 00:26:59.666 "zone_append": false, 00:26:59.666 "compare": false, 00:26:59.666 "compare_and_write": false, 00:26:59.666 "abort": false, 00:26:59.666 "seek_hole": true, 00:26:59.666 "seek_data": true, 00:26:59.666 "copy": false, 00:26:59.666 "nvme_iov_md": false 00:26:59.666 }, 00:26:59.666 "driver_specific": { 00:26:59.666 "lvol": { 00:26:59.666 "lvol_store_uuid": "2f74f35a-b68a-440b-970b-705901425827", 00:26:59.666 "base_bdev": "nvme0n1", 00:26:59.666 "thin_provision": true, 00:26:59.666 "num_allocated_clusters": 0, 00:26:59.666 "snapshot": false, 00:26:59.666 "clone": false, 00:26:59.666 "esnap_clone": false 00:26:59.666 } 00:26:59.666 } 00:26:59.666 } 00:26:59.666 ]' 00:26:59.666 10:18:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:59.666 10:18:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:26:59.666 10:18:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:59.666 10:18:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:59.666 10:18:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:59.666 10:18:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:26:59.666 10:18:30 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:26:59.666 10:18:30 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:26:59.666 10:18:30 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:27:00.233 10:18:30 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:27:00.233 10:18:30 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:27:00.233 10:18:30 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 1075491d-d683-43a0-9715-1b01f2b52433 00:27:00.233 10:18:30 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=1075491d-d683-43a0-9715-1b01f2b52433 00:27:00.233 10:18:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:00.233 10:18:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:27:00.234 10:18:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:27:00.234 10:18:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1075491d-d683-43a0-9715-1b01f2b52433 00:27:00.234 10:18:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:00.234 { 00:27:00.234 "name": "1075491d-d683-43a0-9715-1b01f2b52433", 00:27:00.234 "aliases": [ 00:27:00.234 "lvs/nvme0n1p0" 00:27:00.234 ], 00:27:00.234 "product_name": "Logical Volume", 00:27:00.234 "block_size": 4096, 00:27:00.234 "num_blocks": 26476544, 00:27:00.234 "uuid": "1075491d-d683-43a0-9715-1b01f2b52433", 00:27:00.234 "assigned_rate_limits": { 00:27:00.234 "rw_ios_per_sec": 0, 00:27:00.234 "rw_mbytes_per_sec": 0, 00:27:00.234 "r_mbytes_per_sec": 0, 00:27:00.234 "w_mbytes_per_sec": 0 00:27:00.234 }, 00:27:00.234 "claimed": false, 00:27:00.234 "zoned": false, 00:27:00.234 "supported_io_types": { 00:27:00.234 "read": true, 00:27:00.234 "write": true, 00:27:00.234 "unmap": true, 00:27:00.234 "flush": false, 00:27:00.234 "reset": true, 00:27:00.234 "nvme_admin": false, 00:27:00.234 "nvme_io": false, 00:27:00.234 "nvme_io_md": false, 00:27:00.234 "write_zeroes": true, 00:27:00.234 "zcopy": false, 00:27:00.234 "get_zone_info": false, 00:27:00.234 "zone_management": false, 00:27:00.234 "zone_append": false, 00:27:00.234 "compare": false, 00:27:00.234 "compare_and_write": false, 00:27:00.234 "abort": false, 00:27:00.234 "seek_hole": true, 00:27:00.234 "seek_data": true, 00:27:00.234 "copy": false, 00:27:00.234 "nvme_iov_md": false 00:27:00.234 }, 00:27:00.234 "driver_specific": { 00:27:00.234 "lvol": { 00:27:00.234 "lvol_store_uuid": "2f74f35a-b68a-440b-970b-705901425827", 00:27:00.234 "base_bdev": "nvme0n1", 00:27:00.234 "thin_provision": true, 00:27:00.234 "num_allocated_clusters": 0, 00:27:00.234 "snapshot": false, 00:27:00.234 "clone": false, 00:27:00.234 "esnap_clone": false 00:27:00.234 } 00:27:00.234 } 00:27:00.234 } 00:27:00.234 ]' 00:27:00.234 10:18:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:00.492 10:18:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:27:00.492 10:18:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:00.492 10:18:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:00.492 10:18:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:00.492 10:18:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:27:00.492 10:18:31 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:27:00.492 10:18:31 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:27:00.750 10:18:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:27:00.750 10:18:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 1075491d-d683-43a0-9715-1b01f2b52433 00:27:00.750 10:18:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=1075491d-d683-43a0-9715-1b01f2b52433 00:27:00.750 10:18:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:00.750 10:18:31 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:27:00.750 10:18:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:27:00.750 10:18:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1075491d-d683-43a0-9715-1b01f2b52433 00:27:01.009 10:18:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:01.009 { 00:27:01.009 "name": "1075491d-d683-43a0-9715-1b01f2b52433", 00:27:01.009 "aliases": [ 00:27:01.009 "lvs/nvme0n1p0" 00:27:01.009 ], 00:27:01.009 "product_name": "Logical Volume", 00:27:01.009 "block_size": 4096, 00:27:01.009 "num_blocks": 26476544, 00:27:01.009 "uuid": "1075491d-d683-43a0-9715-1b01f2b52433", 00:27:01.009 "assigned_rate_limits": { 00:27:01.009 "rw_ios_per_sec": 0, 00:27:01.009 "rw_mbytes_per_sec": 0, 00:27:01.009 "r_mbytes_per_sec": 0, 00:27:01.009 "w_mbytes_per_sec": 0 00:27:01.009 }, 00:27:01.009 "claimed": false, 00:27:01.009 "zoned": false, 00:27:01.009 "supported_io_types": { 00:27:01.009 "read": true, 00:27:01.009 "write": true, 00:27:01.009 "unmap": true, 00:27:01.009 "flush": false, 00:27:01.009 "reset": true, 00:27:01.009 "nvme_admin": false, 00:27:01.009 "nvme_io": false, 00:27:01.009 "nvme_io_md": false, 00:27:01.009 "write_zeroes": true, 00:27:01.009 "zcopy": false, 00:27:01.009 "get_zone_info": false, 00:27:01.009 "zone_management": false, 00:27:01.009 "zone_append": false, 00:27:01.009 "compare": false, 00:27:01.009 "compare_and_write": false, 00:27:01.009 "abort": false, 00:27:01.009 "seek_hole": true, 00:27:01.009 "seek_data": true, 00:27:01.009 "copy": false, 00:27:01.009 "nvme_iov_md": false 00:27:01.009 }, 00:27:01.009 "driver_specific": { 00:27:01.009 "lvol": { 00:27:01.009 "lvol_store_uuid": "2f74f35a-b68a-440b-970b-705901425827", 00:27:01.009 "base_bdev": "nvme0n1", 00:27:01.009 "thin_provision": true, 00:27:01.009 "num_allocated_clusters": 0, 00:27:01.009 "snapshot": false, 00:27:01.009 "clone": false, 00:27:01.009 "esnap_clone": false 00:27:01.009 } 00:27:01.009 } 00:27:01.009 } 00:27:01.009 ]' 00:27:01.009 10:18:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:01.009 10:18:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:27:01.009 10:18:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:01.267 10:18:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:01.267 10:18:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:01.267 10:18:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:27:01.267 10:18:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:27:01.267 10:18:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 1075491d-d683-43a0-9715-1b01f2b52433 -c nvc0n1p0 --l2p_dram_limit 20 00:27:01.526 [2024-12-09 10:18:32.102299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.526 [2024-12-09 10:18:32.102392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:01.526 [2024-12-09 10:18:32.102424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:01.526 [2024-12-09 10:18:32.102441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.526 [2024-12-09 10:18:32.102546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.526 [2024-12-09 10:18:32.102571] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:01.526 [2024-12-09 10:18:32.102585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:27:01.526 [2024-12-09 10:18:32.102601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.526 [2024-12-09 10:18:32.102631] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:01.526 [2024-12-09 10:18:32.103685] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:01.526 [2024-12-09 10:18:32.103720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.526 [2024-12-09 10:18:32.103738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:01.526 [2024-12-09 10:18:32.103752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.097 ms 00:27:01.526 [2024-12-09 10:18:32.103767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.526 [2024-12-09 10:18:32.103901] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 2b9ccff7-f8ac-49d0-866d-ef67adfbd97a 00:27:01.526 [2024-12-09 10:18:32.106319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.526 [2024-12-09 10:18:32.106362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:27:01.526 [2024-12-09 10:18:32.106389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:27:01.526 [2024-12-09 10:18:32.106402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.526 [2024-12-09 10:18:32.117697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.526 [2024-12-09 10:18:32.117782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:01.526 [2024-12-09 10:18:32.117810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.196 ms 00:27:01.526 [2024-12-09 10:18:32.117842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.526 [2024-12-09 10:18:32.118040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.526 [2024-12-09 10:18:32.118077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:01.526 [2024-12-09 10:18:32.118118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:27:01.526 [2024-12-09 10:18:32.118132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.526 [2024-12-09 10:18:32.118255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.526 [2024-12-09 10:18:32.118276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:01.526 [2024-12-09 10:18:32.118293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:27:01.526 [2024-12-09 10:18:32.118306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.526 [2024-12-09 10:18:32.118347] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:01.526 [2024-12-09 10:18:32.123911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.526 [2024-12-09 10:18:32.123959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:01.526 [2024-12-09 10:18:32.123977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.582 ms 00:27:01.526 [2024-12-09 10:18:32.123998] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.526 [2024-12-09 10:18:32.124055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.526 [2024-12-09 10:18:32.124075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:01.526 [2024-12-09 10:18:32.124089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:01.526 [2024-12-09 10:18:32.124104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.526 [2024-12-09 10:18:32.124159] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:27:01.526 [2024-12-09 10:18:32.124373] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:01.526 [2024-12-09 10:18:32.124400] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:01.526 [2024-12-09 10:18:32.124421] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:01.526 [2024-12-09 10:18:32.124438] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:01.526 [2024-12-09 10:18:32.124455] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:01.526 [2024-12-09 10:18:32.124469] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:01.526 [2024-12-09 10:18:32.124484] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:01.526 [2024-12-09 10:18:32.124497] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:01.526 [2024-12-09 10:18:32.124514] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:01.526 [2024-12-09 10:18:32.124533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.526 [2024-12-09 10:18:32.124548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:01.526 [2024-12-09 10:18:32.124562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.375 ms 00:27:01.526 [2024-12-09 10:18:32.124577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.526 [2024-12-09 10:18:32.124686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.526 [2024-12-09 10:18:32.124706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:01.526 [2024-12-09 10:18:32.124719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:27:01.526 [2024-12-09 10:18:32.124736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.526 [2024-12-09 10:18:32.124864] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:01.526 [2024-12-09 10:18:32.124892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:01.526 [2024-12-09 10:18:32.124905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:01.526 [2024-12-09 10:18:32.124921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:01.526 [2024-12-09 10:18:32.124934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:01.526 [2024-12-09 10:18:32.124948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:01.526 [2024-12-09 10:18:32.124960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:01.526 
[2024-12-09 10:18:32.124973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:01.526 [2024-12-09 10:18:32.124985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:01.526 [2024-12-09 10:18:32.124998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:01.526 [2024-12-09 10:18:32.125010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:01.526 [2024-12-09 10:18:32.125042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:01.526 [2024-12-09 10:18:32.125053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:01.526 [2024-12-09 10:18:32.125067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:01.527 [2024-12-09 10:18:32.125078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:01.527 [2024-12-09 10:18:32.125104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:01.527 [2024-12-09 10:18:32.125115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:01.527 [2024-12-09 10:18:32.125128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:01.527 [2024-12-09 10:18:32.125139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:01.527 [2024-12-09 10:18:32.125155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:01.527 [2024-12-09 10:18:32.125167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:01.527 [2024-12-09 10:18:32.125181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:01.527 [2024-12-09 10:18:32.125192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:01.527 [2024-12-09 10:18:32.125205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:01.527 [2024-12-09 10:18:32.125216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:01.527 [2024-12-09 10:18:32.125230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:01.527 [2024-12-09 10:18:32.125241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:01.527 [2024-12-09 10:18:32.125254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:01.527 [2024-12-09 10:18:32.125265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:01.527 [2024-12-09 10:18:32.125279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:01.527 [2024-12-09 10:18:32.125290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:01.527 [2024-12-09 10:18:32.125305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:01.527 [2024-12-09 10:18:32.125317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:01.527 [2024-12-09 10:18:32.125331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:01.527 [2024-12-09 10:18:32.125342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:01.527 [2024-12-09 10:18:32.125355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:01.527 [2024-12-09 10:18:32.125366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:01.527 [2024-12-09 10:18:32.125382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:01.527 [2024-12-09 10:18:32.125394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:27:01.527 [2024-12-09 10:18:32.125408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:01.527 [2024-12-09 10:18:32.125419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:01.527 [2024-12-09 10:18:32.125433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:01.527 [2024-12-09 10:18:32.125444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:01.527 [2024-12-09 10:18:32.125458] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:01.527 [2024-12-09 10:18:32.125470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:01.527 [2024-12-09 10:18:32.125484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:01.527 [2024-12-09 10:18:32.125496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:01.527 [2024-12-09 10:18:32.125513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:01.527 [2024-12-09 10:18:32.125525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:01.527 [2024-12-09 10:18:32.125539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:01.527 [2024-12-09 10:18:32.125550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:01.527 [2024-12-09 10:18:32.125565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:01.527 [2024-12-09 10:18:32.125577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:01.527 [2024-12-09 10:18:32.125593] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:01.527 [2024-12-09 10:18:32.125608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:01.527 [2024-12-09 10:18:32.125624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:01.527 [2024-12-09 10:18:32.125637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:01.527 [2024-12-09 10:18:32.125651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:01.527 [2024-12-09 10:18:32.125663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:01.527 [2024-12-09 10:18:32.125678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:01.527 [2024-12-09 10:18:32.125690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:01.527 [2024-12-09 10:18:32.125712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:01.527 [2024-12-09 10:18:32.125724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:01.527 [2024-12-09 10:18:32.125742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:01.527 [2024-12-09 10:18:32.125754] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:01.527 [2024-12-09 10:18:32.125768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:01.527 [2024-12-09 10:18:32.125780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:01.527 [2024-12-09 10:18:32.125794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:01.527 [2024-12-09 10:18:32.125807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:01.527 [2024-12-09 10:18:32.125821] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:01.527 [2024-12-09 10:18:32.125851] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:01.527 [2024-12-09 10:18:32.125871] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:01.527 [2024-12-09 10:18:32.125885] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:01.527 [2024-12-09 10:18:32.125900] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:01.527 [2024-12-09 10:18:32.125913] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:01.527 [2024-12-09 10:18:32.125929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.527 [2024-12-09 10:18:32.125941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:01.527 [2024-12-09 10:18:32.125957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.153 ms 00:27:01.527 [2024-12-09 10:18:32.125969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.527 [2024-12-09 10:18:32.126026] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
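The layout dump above cross-checks cleanly: with 20,971,520 L2P entries at 4 bytes per entry, the l2p metadata region needs 20971520 * 4 B = 80 MiB, exactly the "Region l2p ... blocks: 80.00 MiB" reported; and because the device was created with --l2p_dram_limit 20, only part of that table stays resident, which is what the later "l2p maximum resident size is: 19 (of 20) MiB" notice refers to. A one-liner to reproduce the sizing:

awk 'BEGIN { printf "%.2f MiB\n", 20971520 * 4 / (1024 * 1024) }'   # -> 80.00 MiB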
00:27:01.527 [2024-12-09 10:18:32.126051] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:27:04.812 [2024-12-09 10:18:35.061923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.812 [2024-12-09 10:18:35.062312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:27:04.813 [2024-12-09 10:18:35.062356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2935.906 ms 00:27:04.813 [2024-12-09 10:18:35.062372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.813 [2024-12-09 10:18:35.105583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.813 [2024-12-09 10:18:35.105950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:04.813 [2024-12-09 10:18:35.105995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.908 ms 00:27:04.813 [2024-12-09 10:18:35.106011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.813 [2024-12-09 10:18:35.106268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.813 [2024-12-09 10:18:35.106290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:04.813 [2024-12-09 10:18:35.106312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:27:04.813 [2024-12-09 10:18:35.106326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.813 [2024-12-09 10:18:35.158824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.813 [2024-12-09 10:18:35.158920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:04.813 [2024-12-09 10:18:35.158948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.431 ms 00:27:04.813 [2024-12-09 10:18:35.158962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.813 [2024-12-09 10:18:35.159044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.813 [2024-12-09 10:18:35.159062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:04.813 [2024-12-09 10:18:35.159079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:04.813 [2024-12-09 10:18:35.159095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.813 [2024-12-09 10:18:35.159995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.813 [2024-12-09 10:18:35.160032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:04.813 [2024-12-09 10:18:35.160053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.792 ms 00:27:04.813 [2024-12-09 10:18:35.160066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.813 [2024-12-09 10:18:35.160246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.813 [2024-12-09 10:18:35.160266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:04.813 [2024-12-09 10:18:35.160286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:27:04.813 [2024-12-09 10:18:35.160297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.813 [2024-12-09 10:18:35.181169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.813 [2024-12-09 10:18:35.181543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:04.813 [2024-12-09 
10:18:35.181586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.838 ms 00:27:04.813 [2024-12-09 10:18:35.181618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.813 [2024-12-09 10:18:35.196603] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:27:04.813 [2024-12-09 10:18:35.204637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.813 [2024-12-09 10:18:35.204712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:04.813 [2024-12-09 10:18:35.204735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.819 ms 00:27:04.813 [2024-12-09 10:18:35.204751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.813 [2024-12-09 10:18:35.278463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.813 [2024-12-09 10:18:35.278566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:27:04.813 [2024-12-09 10:18:35.278610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.641 ms 00:27:04.813 [2024-12-09 10:18:35.278626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.813 [2024-12-09 10:18:35.278925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.813 [2024-12-09 10:18:35.278956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:04.813 [2024-12-09 10:18:35.278971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:27:04.813 [2024-12-09 10:18:35.278991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.813 [2024-12-09 10:18:35.311318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.813 [2024-12-09 10:18:35.311408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:27:04.813 [2024-12-09 10:18:35.311431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.202 ms 00:27:04.813 [2024-12-09 10:18:35.311455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.813 [2024-12-09 10:18:35.342189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.813 [2024-12-09 10:18:35.342259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:27:04.813 [2024-12-09 10:18:35.342281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.680 ms 00:27:04.813 [2024-12-09 10:18:35.342296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.813 [2024-12-09 10:18:35.343262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.813 [2024-12-09 10:18:35.343523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:04.813 [2024-12-09 10:18:35.343553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.917 ms 00:27:04.813 [2024-12-09 10:18:35.343570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.813 [2024-12-09 10:18:35.449018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.813 [2024-12-09 10:18:35.449366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:27:04.813 [2024-12-09 10:18:35.449402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 105.360 ms 00:27:04.813 [2024-12-09 10:18:35.449421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.813 [2024-12-09 
10:18:35.482389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.813 [2024-12-09 10:18:35.482466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:27:04.813 [2024-12-09 10:18:35.482509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.858 ms 00:27:04.813 [2024-12-09 10:18:35.482525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.813 [2024-12-09 10:18:35.513130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.813 [2024-12-09 10:18:35.513189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:27:04.813 [2024-12-09 10:18:35.513208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.552 ms 00:27:04.813 [2024-12-09 10:18:35.513221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.813 [2024-12-09 10:18:35.543948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.813 [2024-12-09 10:18:35.543999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:04.813 [2024-12-09 10:18:35.544018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.679 ms 00:27:04.813 [2024-12-09 10:18:35.544032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.813 [2024-12-09 10:18:35.544080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.813 [2024-12-09 10:18:35.544104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:04.813 [2024-12-09 10:18:35.544118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:04.813 [2024-12-09 10:18:35.544131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.813 [2024-12-09 10:18:35.544247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.813 [2024-12-09 10:18:35.544269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:04.813 [2024-12-09 10:18:35.544298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:27:04.813 [2024-12-09 10:18:35.544330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.813 [2024-12-09 10:18:35.545818] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3442.919 ms, result 0 00:27:04.813 { 00:27:04.813 "name": "ftl0", 00:27:04.813 "uuid": "2b9ccff7-f8ac-49d0-866d-ef67adfbd97a" 00:27:04.813 } 00:27:04.813 10:18:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:27:04.813 10:18:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:27:04.813 10:18:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:27:05.383 10:18:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:27:05.383 [2024-12-09 10:18:36.006256] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:27:05.383 I/O size of 69632 is greater than zero copy threshold (65536). 00:27:05.383 Zero copy mechanism will not be used. 00:27:05.383 Running I/O for 4 seconds... 
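The run kicked off above is the first of three bdevperf passes driven over RPC: queue depth 1, random 69632-byte (68 KiB) writes for 4 seconds. The notice fires because 69632 exceeds the 65536-byte zero-copy threshold, so bdevperf falls back to buffered I/O for this pass. A minimal sketch of reproducing the pass by hand, assuming a bdevperf instance already started elsewhere in RPC-wait mode with ftl0 configured (the paths follow the commands in this log):

  # hypothetical manual reproduction of the qd=1 randwrite pass;
  # assumes bdevperf was launched with -z (wait for RPC) and ftl0 exists
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      perform_tests -q 1 -w randwrite -t 4 -o 69632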
00:27:07.253 1593.00 IOPS, 105.79 MiB/s [2024-12-09T10:18:39.425Z] 1617.00 IOPS, 107.38 MiB/s [2024-12-09T10:18:40.360Z] 1675.33 IOPS, 111.25 MiB/s [2024-12-09T10:18:40.360Z] 1701.50 IOPS, 112.99 MiB/s 00:27:09.563 Latency(us) 00:27:09.563 [2024-12-09T10:18:40.360Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:09.563 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:27:09.563 ftl0 : 4.00 1700.77 112.94 0.00 0.00 615.11 242.04 3247.01 00:27:09.563 [2024-12-09T10:18:40.360Z] =================================================================================================================== 00:27:09.563 [2024-12-09T10:18:40.360Z] Total : 1700.77 112.94 0.00 0.00 615.11 242.04 3247.01 00:27:09.563 [2024-12-09 10:18:40.020320] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:27:09.563 { 00:27:09.563 "results": [ 00:27:09.563 { 00:27:09.563 "job": "ftl0", 00:27:09.563 "core_mask": "0x1", 00:27:09.563 "workload": "randwrite", 00:27:09.563 "status": "finished", 00:27:09.563 "queue_depth": 1, 00:27:09.563 "io_size": 69632, 00:27:09.563 "runtime": 4.002305, 00:27:09.563 "iops": 1700.7699313270728, 00:27:09.563 "mibps": 112.94175325218842, 00:27:09.563 "io_failed": 0, 00:27:09.563 "io_timeout": 0, 00:27:09.563 "avg_latency_us": 615.1078574195013, 00:27:09.563 "min_latency_us": 242.03636363636363, 00:27:09.563 "max_latency_us": 3247.010909090909 00:27:09.563 } 00:27:09.563 ], 00:27:09.563 "core_count": 1 00:27:09.563 } 00:27:09.563 10:18:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:27:09.563 [2024-12-09 10:18:40.164840] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:27:09.563 Running I/O for 4 seconds... 
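One sanity check on the first pass's totals before the next pass streams in: the MiB/s column is just IOPS scaled by the I/O size, MiB/s = IOPS * io_size / 2^20. Plugging in the exact values from the results JSON above:

  # reproduces the "mibps" field from "iops" and the -o 69632 I/O size
  awk 'BEGIN { printf "%.2f MiB/s\n", 1700.7699313270728 * 69632 / 1048576 }'
  # -> 112.94 MiB/s, matching the reported mibps of 112.9417...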
00:27:11.434 7298.00 IOPS, 28.51 MiB/s [2024-12-09T10:18:43.607Z] 7431.00 IOPS, 29.03 MiB/s [2024-12-09T10:18:44.542Z] 7471.00 IOPS, 29.18 MiB/s [2024-12-09T10:18:44.542Z] 7281.00 IOPS, 28.44 MiB/s 00:27:13.745 Latency(us) 00:27:13.745 [2024-12-09T10:18:44.542Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:13.745 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:27:13.745 ftl0 : 4.03 7263.32 28.37 0.00 0.00 17563.28 329.54 41466.41 00:27:13.745 [2024-12-09T10:18:44.542Z] =================================================================================================================== 00:27:13.745 [2024-12-09T10:18:44.542Z] Total : 7263.32 28.37 0.00 0.00 17563.28 0.00 41466.41 00:27:13.745 { 00:27:13.745 "results": [ 00:27:13.745 { 00:27:13.745 "job": "ftl0", 00:27:13.745 "core_mask": "0x1", 00:27:13.745 "workload": "randwrite", 00:27:13.745 "status": "finished", 00:27:13.745 "queue_depth": 128, 00:27:13.745 "io_size": 4096, 00:27:13.745 "runtime": 4.027358, 00:27:13.745 "iops": 7263.322505722113, 00:27:13.745 "mibps": 28.372353537977006, 00:27:13.745 "io_failed": 0, 00:27:13.745 "io_timeout": 0, 00:27:13.745 "avg_latency_us": 17563.27899058961, 00:27:13.745 "min_latency_us": 329.5418181818182, 00:27:13.745 "max_latency_us": 41466.41454545454 00:27:13.745 } 00:27:13.745 ], 00:27:13.745 "core_count": 1 00:27:13.745 } 00:27:13.745 [2024-12-09 10:18:44.205008] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:27:13.745 10:18:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:27:13.745 [2024-12-09 10:18:44.364688] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:27:13.745 Running I/O for 4 seconds... 
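Each finished pass emits a results object like the two above. Since the harness already leans on jq, a saved copy could be summarized along these lines (results.json is a hypothetical file name; the field names are taken verbatim from the JSON blocks above):

  # hypothetical summary of a saved bdevperf results blob
  jq -r '.results[] | "\(.job): \(.iops) IOPS, avg \(.avg_latency_us) us, max \(.max_latency_us) us"' results.json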
00:27:15.613 5576.00 IOPS, 21.78 MiB/s [2024-12-09T10:18:47.786Z] 5708.50 IOPS, 22.30 MiB/s [2024-12-09T10:18:48.724Z] 5721.00 IOPS, 22.35 MiB/s [2024-12-09T10:18:48.724Z] 5719.25 IOPS, 22.34 MiB/s 00:27:17.927 Latency(us) 00:27:17.927 [2024-12-09T10:18:48.724Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:17.927 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:17.927 Verification LBA range: start 0x0 length 0x1400000 00:27:17.927 ftl0 : 4.02 5729.47 22.38 0.00 0.00 22254.64 374.23 34317.03 00:27:17.927 [2024-12-09T10:18:48.724Z] =================================================================================================================== 00:27:17.927 [2024-12-09T10:18:48.724Z] Total : 5729.47 22.38 0.00 0.00 22254.64 0.00 34317.03 00:27:17.927 [2024-12-09 10:18:48.401938] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:27:17.927 { 00:27:17.927 "results": [ 00:27:17.927 { 00:27:17.927 "job": "ftl0", 00:27:17.927 "core_mask": "0x1", 00:27:17.927 "workload": "verify", 00:27:17.927 "status": "finished", 00:27:17.927 "verify_range": { 00:27:17.927 "start": 0, 00:27:17.927 "length": 20971520 00:27:17.927 }, 00:27:17.927 "queue_depth": 128, 00:27:17.927 "io_size": 4096, 00:27:17.927 "runtime": 4.015205, 00:27:17.927 "iops": 5729.470848935484, 00:27:17.927 "mibps": 22.380745503654236, 00:27:17.927 "io_failed": 0, 00:27:17.927 "io_timeout": 0, 00:27:17.927 "avg_latency_us": 22254.637341605583, 00:27:17.927 "min_latency_us": 374.22545454545457, 00:27:17.927 "max_latency_us": 34317.03272727273 00:27:17.927 } 00:27:17.927 ], 00:27:17.927 "core_count": 1 00:27:17.927 } 00:27:17.927 10:18:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:27:17.927 [2024-12-09 10:18:48.699675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.927 [2024-12-09 10:18:48.700079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:17.927 [2024-12-09 10:18:48.700257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:17.927 [2024-12-09 10:18:48.700327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.927 [2024-12-09 10:18:48.700410] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:17.927 [2024-12-09 10:18:48.704437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.927 [2024-12-09 10:18:48.704585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:17.927 [2024-12-09 10:18:48.704720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.824 ms 00:27:17.927 [2024-12-09 10:18:48.704770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.927 [2024-12-09 10:18:48.706561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.927 [2024-12-09 10:18:48.706799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:17.927 [2024-12-09 10:18:48.706971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.711 ms 00:27:17.927 [2024-12-09 10:18:48.707034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.187 [2024-12-09 10:18:48.899232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.187 [2024-12-09 10:18:48.899550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:27:18.187 [2024-12-09 10:18:48.899709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 192.043 ms 00:27:18.187 [2024-12-09 10:18:48.899762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.187 [2024-12-09 10:18:48.906713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.187 [2024-12-09 10:18:48.906898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:18.187 [2024-12-09 10:18:48.907048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.842 ms 00:27:18.187 [2024-12-09 10:18:48.907104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.187 [2024-12-09 10:18:48.939956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.187 [2024-12-09 10:18:48.940253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:18.187 [2024-12-09 10:18:48.940398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.585 ms 00:27:18.187 [2024-12-09 10:18:48.940450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.187 [2024-12-09 10:18:48.960833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.187 [2024-12-09 10:18:48.960979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:18.187 [2024-12-09 10:18:48.961022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.303 ms 00:27:18.187 [2024-12-09 10:18:48.961035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.187 [2024-12-09 10:18:48.961239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.187 [2024-12-09 10:18:48.961261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:18.187 [2024-12-09 10:18:48.961282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:27:18.187 [2024-12-09 10:18:48.961294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.448 [2024-12-09 10:18:48.992817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.448 [2024-12-09 10:18:48.992900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:18.448 [2024-12-09 10:18:48.992948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.494 ms 00:27:18.448 [2024-12-09 10:18:48.992961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.448 [2024-12-09 10:18:49.024122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.448 [2024-12-09 10:18:49.024361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:18.448 [2024-12-09 10:18:49.024398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.109 ms 00:27:18.448 [2024-12-09 10:18:49.024412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.448 [2024-12-09 10:18:49.052517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.448 [2024-12-09 10:18:49.052556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:18.448 [2024-12-09 10:18:49.052593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.043 ms 00:27:18.448 [2024-12-09 10:18:49.052604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.448 [2024-12-09 10:18:49.082787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.448 [2024-12-09 
10:18:49.082860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:18.448 [2024-12-09 10:18:49.082920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.073 ms 00:27:18.448 [2024-12-09 10:18:49.082939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.448 [2024-12-09 10:18:49.083046] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:18.448 [2024-12-09 10:18:49.083071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:18.448 [2024-12-09 10:18:49.083092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:18.448 [2024-12-09 10:18:49.083106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:18.448 [2024-12-09 10:18:49.083122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:18.448 [2024-12-09 10:18:49.083135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:18.448 [2024-12-09 10:18:49.083151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 
wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.083994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.084007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.084023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.084036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.084054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.084067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.084083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.084096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.084113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.084126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.084142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.084156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.084183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.084197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.084215] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.084228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.084244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.084258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.084274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.084287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.084306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.084319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.084334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.084348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.084364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.084377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.084392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.084406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.084421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:18.449 [2024-12-09 10:18:49.084435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:18.450 [2024-12-09 10:18:49.084457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:18.450 [2024-12-09 10:18:49.084470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:18.450 [2024-12-09 10:18:49.084485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:18.450 [2024-12-09 10:18:49.084499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:18.450 [2024-12-09 10:18:49.084514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:18.450 [2024-12-09 10:18:49.084532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:18.450 [2024-12-09 10:18:49.084560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:18.450 [2024-12-09 10:18:49.084583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:18.450 [2024-12-09 10:18:49.084599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:18.450 [2024-12-09 10:18:49.084612] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:18.450 [2024-12-09 10:18:49.084629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:18.450 [2024-12-09 10:18:49.084642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:18.450 [2024-12-09 10:18:49.084658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:18.450 [2024-12-09 10:18:49.084681] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:18.450 [2024-12-09 10:18:49.084697] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2b9ccff7-f8ac-49d0-866d-ef67adfbd97a 00:27:18.450 [2024-12-09 10:18:49.084714] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:18.450 [2024-12-09 10:18:49.084729] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:18.450 [2024-12-09 10:18:49.084742] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:18.450 [2024-12-09 10:18:49.084756] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:18.450 [2024-12-09 10:18:49.084768] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:18.450 [2024-12-09 10:18:49.084784] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:18.450 [2024-12-09 10:18:49.084797] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:18.450 [2024-12-09 10:18:49.084813] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:18.450 [2024-12-09 10:18:49.084835] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:18.450 [2024-12-09 10:18:49.084853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.450 [2024-12-09 10:18:49.084866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:18.450 [2024-12-09 10:18:49.084881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.821 ms 00:27:18.450 [2024-12-09 10:18:49.084893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.450 [2024-12-09 10:18:49.102833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.450 [2024-12-09 10:18:49.102922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:18.450 [2024-12-09 10:18:49.102962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.873 ms 00:27:18.450 [2024-12-09 10:18:49.102984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.450 [2024-12-09 10:18:49.103506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.450 [2024-12-09 10:18:49.103536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:18.450 [2024-12-09 10:18:49.103556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.489 ms 00:27:18.450 [2024-12-09 10:18:49.103570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.450 [2024-12-09 10:18:49.155029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:18.450 [2024-12-09 10:18:49.155291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:18.450 [2024-12-09 10:18:49.155346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:18.450 [2024-12-09 10:18:49.155361] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:27:18.450 [2024-12-09 10:18:49.155468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:18.450 [2024-12-09 10:18:49.155485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:18.450 [2024-12-09 10:18:49.155501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:18.450 [2024-12-09 10:18:49.155513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.450 [2024-12-09 10:18:49.155693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:18.450 [2024-12-09 10:18:49.155715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:18.450 [2024-12-09 10:18:49.155733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:18.450 [2024-12-09 10:18:49.155746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.450 [2024-12-09 10:18:49.155775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:18.450 [2024-12-09 10:18:49.155806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:18.450 [2024-12-09 10:18:49.155835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:18.450 [2024-12-09 10:18:49.155847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.710 [2024-12-09 10:18:49.272296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:18.710 [2024-12-09 10:18:49.272359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:18.710 [2024-12-09 10:18:49.272388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:18.710 [2024-12-09 10:18:49.272401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.710 [2024-12-09 10:18:49.363279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:18.710 [2024-12-09 10:18:49.363366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:18.710 [2024-12-09 10:18:49.363408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:18.710 [2024-12-09 10:18:49.363421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.710 [2024-12-09 10:18:49.363589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:18.710 [2024-12-09 10:18:49.363627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:18.710 [2024-12-09 10:18:49.363644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:18.710 [2024-12-09 10:18:49.363657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.710 [2024-12-09 10:18:49.363731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:18.710 [2024-12-09 10:18:49.363752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:18.710 [2024-12-09 10:18:49.363769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:18.710 [2024-12-09 10:18:49.363781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.710 [2024-12-09 10:18:49.363936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:18.710 [2024-12-09 10:18:49.363961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:18.710 [2024-12-09 10:18:49.363982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:27:18.710 [2024-12-09 10:18:49.363994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.710 [2024-12-09 10:18:49.364090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:18.710 [2024-12-09 10:18:49.364110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:18.710 [2024-12-09 10:18:49.364125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:18.710 [2024-12-09 10:18:49.364137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.710 [2024-12-09 10:18:49.364189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:18.710 [2024-12-09 10:18:49.364208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:18.710 [2024-12-09 10:18:49.364224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:18.710 [2024-12-09 10:18:49.364249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.710 [2024-12-09 10:18:49.364321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:18.710 [2024-12-09 10:18:49.364339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:18.710 [2024-12-09 10:18:49.364354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:18.710 [2024-12-09 10:18:49.364366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.710 [2024-12-09 10:18:49.364568] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 664.847 ms, result 0 00:27:18.710 true 00:27:18.710 10:18:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 78370 00:27:18.710 10:18:49 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 78370 ']' 00:27:18.710 10:18:49 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 78370 00:27:18.710 10:18:49 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:27:18.710 10:18:49 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:18.710 10:18:49 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78370 00:27:18.710 killing process with pid 78370 00:27:18.710 Received shutdown signal, test time was about 4.000000 seconds 00:27:18.710 00:27:18.710 Latency(us) 00:27:18.710 [2024-12-09T10:18:49.507Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:18.710 [2024-12-09T10:18:49.507Z] =================================================================================================================== 00:27:18.710 [2024-12-09T10:18:49.507Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:18.710 10:18:49 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:18.710 10:18:49 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:18.710 10:18:49 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78370' 00:27:18.710 10:18:49 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 78370 00:27:18.710 10:18:49 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 78370 00:27:22.901 Remove shared memory files 00:27:22.901 10:18:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:27:22.901 10:18:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:27:22.901 10:18:53 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:22.901 10:18:53 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:27:22.901 10:18:53 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:27:22.901 10:18:53 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:27:22.901 10:18:53 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:22.901 10:18:53 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:27:22.901 ************************************ 00:27:22.901 END TEST ftl_bdevperf 00:27:22.901 ************************************ 00:27:22.901 00:27:22.901 real 0m26.470s 00:27:22.901 user 0m30.385s 00:27:22.901 sys 0m1.377s 00:27:22.901 10:18:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:22.901 10:18:53 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:22.901 10:18:53 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:27:22.901 10:18:53 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:22.901 10:18:53 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:22.901 10:18:53 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:22.901 ************************************ 00:27:22.901 START TEST ftl_trim 00:27:22.901 ************************************ 00:27:22.901 10:18:53 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:27:22.901 * Looking for test storage... 00:27:22.901 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:22.901 10:18:53 ftl.ftl_trim -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:22.901 10:18:53 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:22.901 10:18:53 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lcov --version 00:27:22.901 10:18:53 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:22.901 10:18:53 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:22.901 10:18:53 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:22.901 10:18:53 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:22.901 10:18:53 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:27:22.901 10:18:53 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:27:22.901 10:18:53 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:27:22.901 10:18:53 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:27:22.901 10:18:53 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:27:22.901 10:18:53 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:27:22.901 10:18:53 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:27:22.901 10:18:53 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:22.901 10:18:53 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:27:22.901 10:18:53 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:27:22.901 10:18:53 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:22.901 10:18:53 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:22.901 10:18:53 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:27:22.901 10:18:53 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:27:22.901 10:18:53 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:22.901 10:18:53 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:27:22.901 10:18:53 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:27:22.901 10:18:53 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:27:22.901 10:18:53 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:27:22.901 10:18:53 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:22.901 10:18:53 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:27:22.901 10:18:53 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:27:22.901 10:18:53 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:22.901 10:18:53 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:22.901 10:18:53 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:27:22.901 10:18:53 ftl.ftl_trim -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:22.901 10:18:53 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:22.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:22.901 --rc genhtml_branch_coverage=1 00:27:22.901 --rc genhtml_function_coverage=1 00:27:22.901 --rc genhtml_legend=1 00:27:22.901 --rc geninfo_all_blocks=1 00:27:22.901 --rc geninfo_unexecuted_blocks=1 00:27:22.901 00:27:22.901 ' 00:27:22.901 10:18:53 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:22.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:22.901 --rc genhtml_branch_coverage=1 00:27:22.901 --rc genhtml_function_coverage=1 00:27:22.901 --rc genhtml_legend=1 00:27:22.901 --rc geninfo_all_blocks=1 00:27:22.901 --rc geninfo_unexecuted_blocks=1 00:27:22.901 00:27:22.901 ' 00:27:22.901 10:18:53 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:22.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:22.901 --rc genhtml_branch_coverage=1 00:27:22.901 --rc genhtml_function_coverage=1 00:27:22.901 --rc genhtml_legend=1 00:27:22.901 --rc geninfo_all_blocks=1 00:27:22.901 --rc geninfo_unexecuted_blocks=1 00:27:22.901 00:27:22.901 ' 00:27:22.901 10:18:53 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:22.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:22.901 --rc genhtml_branch_coverage=1 00:27:22.901 --rc genhtml_function_coverage=1 00:27:22.901 --rc genhtml_legend=1 00:27:22.901 --rc geninfo_all_blocks=1 00:27:22.901 --rc geninfo_unexecuted_blocks=1 00:27:22.901 00:27:22.901 ' 00:27:22.901 10:18:53 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:22.901 10:18:53 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:27:22.901 10:18:53 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:22.901 10:18:53 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:22.901 10:18:53 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
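The cmp_versions walk traced above (checking lcov 1.15 against 2) splits each version string on '.', '-', and ':', then compares numerically component by component, treating missing components as 0. A condensed, illustrative re-implementation of that logic (a sketch under those assumptions, not the exact scripts/common.sh code):

  cmp_versions_sketch() {    # usage: cmp_versions_sketch 1.15 '<' 2
      local IFS=.-:          # split versions on '.', '-' and ':'
      local -a v1 v2
      read -ra v1 <<< "$1"
      read -ra v2 <<< "$3"
      local v rel='='
      for (( v = 0; v < (${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}); v++ )); do
          if   (( ${v1[v]:-0} > ${v2[v]:-0} )); then rel='>'; break
          elif (( ${v1[v]:-0} < ${v2[v]:-0} )); then rel='<'; break
          fi
      done
      [[ $2 == *"$rel"* ]]   # '<' satisfies '<' and '<='; '=' satisfies '<=', '>=', '=='
  }
  cmp_versions_sketch 1.15 '<' 2 && echo "lcov 1.15 predates 2.x"   # succeeds, as in the trace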
00:27:22.901 10:18:53 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:22.901 10:18:53 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:22.901 10:18:53 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:22.901 10:18:53 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:22.901 10:18:53 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:22.901 10:18:53 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:22.901 10:18:53 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:22.901 10:18:53 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:22.901 10:18:53 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:22.901 10:18:53 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:22.901 10:18:53 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:22.901 10:18:53 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:22.901 10:18:53 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:22.901 10:18:53 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:22.901 10:18:53 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:22.901 10:18:53 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:22.901 10:18:53 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:22.901 10:18:53 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:22.901 10:18:53 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:22.901 10:18:53 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:22.901 10:18:53 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:22.901 10:18:53 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:22.901 10:18:53 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:22.901 10:18:53 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:22.901 10:18:53 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:22.901 10:18:53 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:27:22.901 10:18:53 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:27:22.901 10:18:53 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:27:22.901 10:18:53 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:27:22.901 10:18:53 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:27:22.901 10:18:53 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:27:22.901 10:18:53 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:27:22.901 10:18:53 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:27:22.901 10:18:53 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:22.901 10:18:53 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:22.901 10:18:53 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:27:22.902 10:18:53 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=78733 00:27:22.902 10:18:53 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:27:22.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:22.902 10:18:53 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 78733 00:27:22.902 10:18:53 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78733 ']' 00:27:22.902 10:18:53 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:22.902 10:18:53 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:22.902 10:18:53 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:22.902 10:18:53 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:22.902 10:18:53 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:27:22.902 [2024-12-09 10:18:53.572579] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:27:22.902 [2024-12-09 10:18:53.572786] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78733 ] 00:27:23.161 [2024-12-09 10:18:53.767806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:23.161 [2024-12-09 10:18:53.945419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:23.161 [2024-12-09 10:18:53.945562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:23.161 [2024-12-09 10:18:53.945597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:24.096 10:18:54 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:24.096 10:18:54 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:27:24.096 10:18:54 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:27:24.096 10:18:54 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:27:24.096 10:18:54 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:24.096 10:18:54 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:27:24.096 10:18:54 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:27:24.096 10:18:54 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:27:24.664 10:18:55 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:27:24.664 10:18:55 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:27:24.664 10:18:55 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:27:24.664 10:18:55 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:27:24.664 10:18:55 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:24.664 10:18:55 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:27:24.664 10:18:55 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:27:24.664 10:18:55 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:27:24.922 10:18:55 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:24.922 { 00:27:24.922 "name": "nvme0n1", 00:27:24.922 "aliases": [ 
00:27:24.922 "5423868a-1b93-4afa-96be-f0ef5ac4ad5d" 00:27:24.922 ], 00:27:24.922 "product_name": "NVMe disk", 00:27:24.922 "block_size": 4096, 00:27:24.922 "num_blocks": 1310720, 00:27:24.922 "uuid": "5423868a-1b93-4afa-96be-f0ef5ac4ad5d", 00:27:24.922 "numa_id": -1, 00:27:24.922 "assigned_rate_limits": { 00:27:24.922 "rw_ios_per_sec": 0, 00:27:24.922 "rw_mbytes_per_sec": 0, 00:27:24.922 "r_mbytes_per_sec": 0, 00:27:24.922 "w_mbytes_per_sec": 0 00:27:24.922 }, 00:27:24.922 "claimed": true, 00:27:24.922 "claim_type": "read_many_write_one", 00:27:24.922 "zoned": false, 00:27:24.922 "supported_io_types": { 00:27:24.922 "read": true, 00:27:24.922 "write": true, 00:27:24.922 "unmap": true, 00:27:24.922 "flush": true, 00:27:24.922 "reset": true, 00:27:24.922 "nvme_admin": true, 00:27:24.922 "nvme_io": true, 00:27:24.922 "nvme_io_md": false, 00:27:24.922 "write_zeroes": true, 00:27:24.922 "zcopy": false, 00:27:24.922 "get_zone_info": false, 00:27:24.922 "zone_management": false, 00:27:24.922 "zone_append": false, 00:27:24.922 "compare": true, 00:27:24.922 "compare_and_write": false, 00:27:24.922 "abort": true, 00:27:24.922 "seek_hole": false, 00:27:24.922 "seek_data": false, 00:27:24.922 "copy": true, 00:27:24.922 "nvme_iov_md": false 00:27:24.922 }, 00:27:24.922 "driver_specific": { 00:27:24.922 "nvme": [ 00:27:24.922 { 00:27:24.922 "pci_address": "0000:00:11.0", 00:27:24.922 "trid": { 00:27:24.922 "trtype": "PCIe", 00:27:24.922 "traddr": "0000:00:11.0" 00:27:24.922 }, 00:27:24.922 "ctrlr_data": { 00:27:24.922 "cntlid": 0, 00:27:24.922 "vendor_id": "0x1b36", 00:27:24.922 "model_number": "QEMU NVMe Ctrl", 00:27:24.922 "serial_number": "12341", 00:27:24.922 "firmware_revision": "8.0.0", 00:27:24.922 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:24.922 "oacs": { 00:27:24.922 "security": 0, 00:27:24.922 "format": 1, 00:27:24.922 "firmware": 0, 00:27:24.922 "ns_manage": 1 00:27:24.922 }, 00:27:24.922 "multi_ctrlr": false, 00:27:24.922 "ana_reporting": false 00:27:24.922 }, 00:27:24.922 "vs": { 00:27:24.922 "nvme_version": "1.4" 00:27:24.922 }, 00:27:24.922 "ns_data": { 00:27:24.922 "id": 1, 00:27:24.922 "can_share": false 00:27:24.922 } 00:27:24.922 } 00:27:24.922 ], 00:27:24.922 "mp_policy": "active_passive" 00:27:24.922 } 00:27:24.922 } 00:27:24.922 ]' 00:27:24.922 10:18:55 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:24.922 10:18:55 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:27:24.922 10:18:55 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:24.922 10:18:55 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:27:24.922 10:18:55 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:27:24.922 10:18:55 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:27:24.922 10:18:55 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:27:24.922 10:18:55 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:27:24.922 10:18:55 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:27:24.922 10:18:55 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:24.922 10:18:55 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:25.487 10:18:56 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=2f74f35a-b68a-440b-970b-705901425827 00:27:25.487 10:18:56 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:27:25.487 10:18:56 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 2f74f35a-b68a-440b-970b-705901425827 00:27:25.744 10:18:56 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:27:26.003 10:18:56 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=db7fa2c6-7017-4ef1-b032-86775a359c21 00:27:26.003 10:18:56 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u db7fa2c6-7017-4ef1-b032-86775a359c21 00:27:26.261 10:18:56 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=5506e51f-1972-4596-8b90-38a7a8826a80 00:27:26.262 10:18:56 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 5506e51f-1972-4596-8b90-38a7a8826a80 00:27:26.262 10:18:56 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:27:26.262 10:18:56 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:26.262 10:18:56 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=5506e51f-1972-4596-8b90-38a7a8826a80 00:27:26.262 10:18:56 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:27:26.262 10:18:56 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 5506e51f-1972-4596-8b90-38a7a8826a80 00:27:26.262 10:18:56 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=5506e51f-1972-4596-8b90-38a7a8826a80 00:27:26.262 10:18:56 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:26.262 10:18:56 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:27:26.262 10:18:56 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:27:26.262 10:18:56 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5506e51f-1972-4596-8b90-38a7a8826a80 00:27:26.521 10:18:57 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:26.521 { 00:27:26.521 "name": "5506e51f-1972-4596-8b90-38a7a8826a80", 00:27:26.521 "aliases": [ 00:27:26.521 "lvs/nvme0n1p0" 00:27:26.521 ], 00:27:26.521 "product_name": "Logical Volume", 00:27:26.521 "block_size": 4096, 00:27:26.521 "num_blocks": 26476544, 00:27:26.521 "uuid": "5506e51f-1972-4596-8b90-38a7a8826a80", 00:27:26.521 "assigned_rate_limits": { 00:27:26.521 "rw_ios_per_sec": 0, 00:27:26.521 "rw_mbytes_per_sec": 0, 00:27:26.521 "r_mbytes_per_sec": 0, 00:27:26.521 "w_mbytes_per_sec": 0 00:27:26.521 }, 00:27:26.521 "claimed": false, 00:27:26.521 "zoned": false, 00:27:26.521 "supported_io_types": { 00:27:26.521 "read": true, 00:27:26.521 "write": true, 00:27:26.521 "unmap": true, 00:27:26.521 "flush": false, 00:27:26.521 "reset": true, 00:27:26.521 "nvme_admin": false, 00:27:26.521 "nvme_io": false, 00:27:26.521 "nvme_io_md": false, 00:27:26.521 "write_zeroes": true, 00:27:26.521 "zcopy": false, 00:27:26.521 "get_zone_info": false, 00:27:26.521 "zone_management": false, 00:27:26.521 "zone_append": false, 00:27:26.521 "compare": false, 00:27:26.521 "compare_and_write": false, 00:27:26.521 "abort": false, 00:27:26.521 "seek_hole": true, 00:27:26.521 "seek_data": true, 00:27:26.521 "copy": false, 00:27:26.521 "nvme_iov_md": false 00:27:26.521 }, 00:27:26.521 "driver_specific": { 00:27:26.521 "lvol": { 00:27:26.521 "lvol_store_uuid": "db7fa2c6-7017-4ef1-b032-86775a359c21", 00:27:26.521 "base_bdev": "nvme0n1", 00:27:26.521 "thin_provision": true, 00:27:26.521 "num_allocated_clusters": 0, 00:27:26.521 "snapshot": false, 00:27:26.521 "clone": false, 00:27:26.521 "esnap_clone": false 00:27:26.521 } 00:27:26.521 } 00:27:26.521 } 00:27:26.521 ]' 00:27:26.521 10:18:57 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:26.521 10:18:57 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:27:26.521 10:18:57 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:26.521 10:18:57 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:26.521 10:18:57 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:26.521 10:18:57 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:27:26.521 10:18:57 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:27:26.521 10:18:57 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:27:26.521 10:18:57 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:27:27.089 10:18:57 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:27:27.089 10:18:57 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:27:27.089 10:18:57 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 5506e51f-1972-4596-8b90-38a7a8826a80 00:27:27.089 10:18:57 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=5506e51f-1972-4596-8b90-38a7a8826a80 00:27:27.089 10:18:57 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:27.089 10:18:57 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:27:27.089 10:18:57 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:27:27.089 10:18:57 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5506e51f-1972-4596-8b90-38a7a8826a80 00:27:27.089 10:18:57 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:27.089 { 00:27:27.089 "name": "5506e51f-1972-4596-8b90-38a7a8826a80", 00:27:27.089 "aliases": [ 00:27:27.089 "lvs/nvme0n1p0" 00:27:27.089 ], 00:27:27.089 "product_name": "Logical Volume", 00:27:27.089 "block_size": 4096, 00:27:27.089 "num_blocks": 26476544, 00:27:27.089 "uuid": "5506e51f-1972-4596-8b90-38a7a8826a80", 00:27:27.089 "assigned_rate_limits": { 00:27:27.089 "rw_ios_per_sec": 0, 00:27:27.089 "rw_mbytes_per_sec": 0, 00:27:27.089 "r_mbytes_per_sec": 0, 00:27:27.089 "w_mbytes_per_sec": 0 00:27:27.089 }, 00:27:27.089 "claimed": false, 00:27:27.089 "zoned": false, 00:27:27.089 "supported_io_types": { 00:27:27.089 "read": true, 00:27:27.089 "write": true, 00:27:27.089 "unmap": true, 00:27:27.089 "flush": false, 00:27:27.089 "reset": true, 00:27:27.089 "nvme_admin": false, 00:27:27.089 "nvme_io": false, 00:27:27.089 "nvme_io_md": false, 00:27:27.089 "write_zeroes": true, 00:27:27.089 "zcopy": false, 00:27:27.089 "get_zone_info": false, 00:27:27.089 "zone_management": false, 00:27:27.089 "zone_append": false, 00:27:27.089 "compare": false, 00:27:27.089 "compare_and_write": false, 00:27:27.089 "abort": false, 00:27:27.089 "seek_hole": true, 00:27:27.089 "seek_data": true, 00:27:27.089 "copy": false, 00:27:27.089 "nvme_iov_md": false 00:27:27.089 }, 00:27:27.089 "driver_specific": { 00:27:27.089 "lvol": { 00:27:27.089 "lvol_store_uuid": "db7fa2c6-7017-4ef1-b032-86775a359c21", 00:27:27.089 "base_bdev": "nvme0n1", 00:27:27.089 "thin_provision": true, 00:27:27.089 "num_allocated_clusters": 0, 00:27:27.089 "snapshot": false, 00:27:27.089 "clone": false, 00:27:27.089 "esnap_clone": false 00:27:27.089 } 00:27:27.089 } 00:27:27.089 } 00:27:27.089 ]' 00:27:27.089 10:18:57 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:27.089 10:18:57 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:27:27.089 10:18:57 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:27.348 10:18:57 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:27.348 10:18:57 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:27.348 10:18:57 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:27:27.348 10:18:57 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:27:27.348 10:18:57 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:27:27.607 10:18:58 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:27:27.607 10:18:58 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:27:27.607 10:18:58 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 5506e51f-1972-4596-8b90-38a7a8826a80 00:27:27.607 10:18:58 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=5506e51f-1972-4596-8b90-38a7a8826a80 00:27:27.607 10:18:58 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:27.607 10:18:58 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:27:27.607 10:18:58 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:27:27.607 10:18:58 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5506e51f-1972-4596-8b90-38a7a8826a80 00:27:27.867 10:18:58 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:27.867 { 00:27:27.867 "name": "5506e51f-1972-4596-8b90-38a7a8826a80", 00:27:27.867 "aliases": [ 00:27:27.867 "lvs/nvme0n1p0" 00:27:27.867 ], 00:27:27.867 "product_name": "Logical Volume", 00:27:27.867 "block_size": 4096, 00:27:27.867 "num_blocks": 26476544, 00:27:27.867 "uuid": "5506e51f-1972-4596-8b90-38a7a8826a80", 00:27:27.867 "assigned_rate_limits": { 00:27:27.867 "rw_ios_per_sec": 0, 00:27:27.867 "rw_mbytes_per_sec": 0, 00:27:27.867 "r_mbytes_per_sec": 0, 00:27:27.867 "w_mbytes_per_sec": 0 00:27:27.867 }, 00:27:27.867 "claimed": false, 00:27:27.867 "zoned": false, 00:27:27.867 "supported_io_types": { 00:27:27.867 "read": true, 00:27:27.867 "write": true, 00:27:27.867 "unmap": true, 00:27:27.867 "flush": false, 00:27:27.867 "reset": true, 00:27:27.867 "nvme_admin": false, 00:27:27.867 "nvme_io": false, 00:27:27.867 "nvme_io_md": false, 00:27:27.867 "write_zeroes": true, 00:27:27.867 "zcopy": false, 00:27:27.867 "get_zone_info": false, 00:27:27.867 "zone_management": false, 00:27:27.867 "zone_append": false, 00:27:27.867 "compare": false, 00:27:27.867 "compare_and_write": false, 00:27:27.867 "abort": false, 00:27:27.867 "seek_hole": true, 00:27:27.867 "seek_data": true, 00:27:27.867 "copy": false, 00:27:27.867 "nvme_iov_md": false 00:27:27.867 }, 00:27:27.867 "driver_specific": { 00:27:27.867 "lvol": { 00:27:27.867 "lvol_store_uuid": "db7fa2c6-7017-4ef1-b032-86775a359c21", 00:27:27.867 "base_bdev": "nvme0n1", 00:27:27.867 "thin_provision": true, 00:27:27.867 "num_allocated_clusters": 0, 00:27:27.867 "snapshot": false, 00:27:27.867 "clone": false, 00:27:27.867 "esnap_clone": false 00:27:27.867 } 00:27:27.867 } 00:27:27.867 } 00:27:27.867 ]' 00:27:27.867 10:18:58 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:27.867 10:18:58 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:27:27.867 10:18:58 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:27.867 10:18:58 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:27:27.867 10:18:58 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:27.867 10:18:58 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:27:27.867 10:18:58 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:27:27.867 10:18:58 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 5506e51f-1972-4596-8b90-38a7a8826a80 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:27:28.433 [2024-12-09 10:18:58.945906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.433 [2024-12-09 10:18:58.945983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:28.433 [2024-12-09 10:18:58.946025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:28.433 [2024-12-09 10:18:58.946039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.433 [2024-12-09 10:18:58.950020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.433 [2024-12-09 10:18:58.950067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:28.433 [2024-12-09 10:18:58.950123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.927 ms 00:27:28.433 [2024-12-09 10:18:58.950143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.433 [2024-12-09 10:18:58.950337] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:28.433 [2024-12-09 10:18:58.951360] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:28.433 [2024-12-09 10:18:58.951409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.433 [2024-12-09 10:18:58.951427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:28.434 [2024-12-09 10:18:58.951442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.085 ms 00:27:28.434 [2024-12-09 10:18:58.951454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.434 [2024-12-09 10:18:58.951702] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 1416d288-a52a-4793-951a-3821cfb97ba2 00:27:28.434 [2024-12-09 10:18:58.953656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.434 [2024-12-09 10:18:58.953702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:27:28.434 [2024-12-09 10:18:58.953729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:27:28.434 [2024-12-09 10:18:58.953744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.434 [2024-12-09 10:18:58.964768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.434 [2024-12-09 10:18:58.964874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:28.434 [2024-12-09 10:18:58.964902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.899 ms 00:27:28.434 [2024-12-09 10:18:58.964919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.434 [2024-12-09 10:18:58.965213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.434 [2024-12-09 10:18:58.965241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:28.434 [2024-12-09 10:18:58.965257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.139 ms 00:27:28.434 [2024-12-09 10:18:58.965283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.434 [2024-12-09 10:18:58.965343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.434 [2024-12-09 10:18:58.965363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:28.434 [2024-12-09 10:18:58.965376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:27:28.434 [2024-12-09 10:18:58.965395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.434 [2024-12-09 10:18:58.965446] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:27:28.434 [2024-12-09 10:18:58.971148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.434 [2024-12-09 10:18:58.971364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:28.434 [2024-12-09 10:18:58.971404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.712 ms 00:27:28.434 [2024-12-09 10:18:58.971418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.434 [2024-12-09 10:18:58.971520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.434 [2024-12-09 10:18:58.971563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:28.434 [2024-12-09 10:18:58.971581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:28.434 [2024-12-09 10:18:58.971594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.434 [2024-12-09 10:18:58.971642] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:27:28.434 [2024-12-09 10:18:58.971816] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:28.434 [2024-12-09 10:18:58.971872] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:28.434 [2024-12-09 10:18:58.971892] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:28.434 [2024-12-09 10:18:58.971911] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:28.434 [2024-12-09 10:18:58.971925] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:28.434 [2024-12-09 10:18:58.971947] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:27:28.434 [2024-12-09 10:18:58.971960] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:28.434 [2024-12-09 10:18:58.971977] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:28.434 [2024-12-09 10:18:58.972003] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:28.434 [2024-12-09 10:18:58.972018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.434 [2024-12-09 10:18:58.972030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:28.434 [2024-12-09 10:18:58.972046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.378 ms 00:27:28.434 [2024-12-09 10:18:58.972058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.434 [2024-12-09 10:18:58.972205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
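
The layout numbers just printed line up with the bdev_ftl_create invocation above: the base device is the 26476544-block, 4096 B/block logical volume (103424 MiB), and the 23592960 L2P entries at 4 B each make a 90 MiB mapping table, which is why the l2p region in the dump below occupies 90.00 MiB while --l2p_dram_limit 60 caps how much of it may stay resident (hence the "l2p maximum resident size is: 59 (of 60) MiB" notice later in this run). A plain-shell sanity check of that arithmetic, with every input copied from this log; the script and variable names are illustrative only, not SPDK tooling:

  # ftl_layout_math.sh - illustrative sketch; all inputs copied from this test run.
  block_size=4096        # bytes per block of the base bdev (bdev_get_bdevs above)
  num_blocks=26476544    # blocks in the base logical volume
  l2p_entries=23592960   # "L2P entries" from the ftl_layout dump
  entry_bytes=4          # "L2P address size" from the ftl_layout dump
  dram_limit_mib=60      # --l2p_dram_limit passed to bdev_ftl_create

  echo "base device: $(( block_size * num_blocks / 1024 / 1024 )) MiB"    # prints 103424
  echo "full L2P:    $(( l2p_entries * entry_bytes / 1024 / 1024 )) MiB"  # prints 90
  echo "resident L2P capped at ${dram_limit_mib} MiB by --l2p_dram_limit"

The same 23592960 user blocks reappear below as num_blocks of the finished ftl0 bdev.
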
00:27:28.434 [2024-12-09 10:18:58.972228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:28.434 [2024-12-09 10:18:58.972244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:27:28.434 [2024-12-09 10:18:58.972255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.434 [2024-12-09 10:18:58.972408] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:28.434 [2024-12-09 10:18:58.972433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:28.434 [2024-12-09 10:18:58.972450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:28.434 [2024-12-09 10:18:58.972462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:28.434 [2024-12-09 10:18:58.972477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:28.434 [2024-12-09 10:18:58.972487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:28.434 [2024-12-09 10:18:58.972501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:27:28.434 [2024-12-09 10:18:58.972512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:28.434 [2024-12-09 10:18:58.972526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:27:28.434 [2024-12-09 10:18:58.972536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:28.434 [2024-12-09 10:18:58.972555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:28.434 [2024-12-09 10:18:58.972566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:27:28.434 [2024-12-09 10:18:58.972581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:28.434 [2024-12-09 10:18:58.972592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:28.434 [2024-12-09 10:18:58.972606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:27:28.434 [2024-12-09 10:18:58.972616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:28.434 [2024-12-09 10:18:58.972632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:28.434 [2024-12-09 10:18:58.972643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:27:28.434 [2024-12-09 10:18:58.972656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:28.434 [2024-12-09 10:18:58.972667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:28.434 [2024-12-09 10:18:58.972680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:27:28.434 [2024-12-09 10:18:58.972690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:28.434 [2024-12-09 10:18:58.972711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:28.434 [2024-12-09 10:18:58.972721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:27:28.434 [2024-12-09 10:18:58.972733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:28.434 [2024-12-09 10:18:58.972744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:28.434 [2024-12-09 10:18:58.972758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:27:28.434 [2024-12-09 10:18:58.972768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:28.434 [2024-12-09 10:18:58.972783] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region p2l3 00:27:28.434 [2024-12-09 10:18:58.972794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:27:28.434 [2024-12-09 10:18:58.972808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:28.434 [2024-12-09 10:18:58.972819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:28.434 [2024-12-09 10:18:58.972851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:27:28.434 [2024-12-09 10:18:58.972863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:28.434 [2024-12-09 10:18:58.972877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:28.434 [2024-12-09 10:18:58.972889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:27:28.434 [2024-12-09 10:18:58.972902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:28.434 [2024-12-09 10:18:58.972914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:28.434 [2024-12-09 10:18:58.972937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:27:28.434 [2024-12-09 10:18:58.972948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:28.434 [2024-12-09 10:18:58.972962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:28.434 [2024-12-09 10:18:58.972972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:27:28.434 [2024-12-09 10:18:58.972986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:28.434 [2024-12-09 10:18:58.972998] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:28.434 [2024-12-09 10:18:58.973013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:28.434 [2024-12-09 10:18:58.973025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:28.434 [2024-12-09 10:18:58.973051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:28.434 [2024-12-09 10:18:58.973064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:28.434 [2024-12-09 10:18:58.973080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:28.434 [2024-12-09 10:18:58.973092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:28.434 [2024-12-09 10:18:58.973106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:28.434 [2024-12-09 10:18:58.973117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:28.434 [2024-12-09 10:18:58.973141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:28.434 [2024-12-09 10:18:58.973164] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:28.434 [2024-12-09 10:18:58.973182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:28.434 [2024-12-09 10:18:58.973198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:27:28.434 [2024-12-09 10:18:58.973213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:27:28.435 [2024-12-09 10:18:58.973225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 
blk_offs:0x5aa0 blk_sz:0x80 00:27:28.435 [2024-12-09 10:18:58.973240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:27:28.435 [2024-12-09 10:18:58.973252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:27:28.435 [2024-12-09 10:18:58.973269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:27:28.435 [2024-12-09 10:18:58.973281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:27:28.435 [2024-12-09 10:18:58.973296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:27:28.435 [2024-12-09 10:18:58.973308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:27:28.435 [2024-12-09 10:18:58.973327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:27:28.435 [2024-12-09 10:18:58.973339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:27:28.435 [2024-12-09 10:18:58.973354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:27:28.435 [2024-12-09 10:18:58.973367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:27:28.435 [2024-12-09 10:18:58.973387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:27:28.435 [2024-12-09 10:18:58.973399] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:28.435 [2024-12-09 10:18:58.973419] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:28.435 [2024-12-09 10:18:58.973432] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:28.435 [2024-12-09 10:18:58.973447] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:28.435 [2024-12-09 10:18:58.973459] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:28.435 [2024-12-09 10:18:58.973474] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:28.435 [2024-12-09 10:18:58.973488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.435 [2024-12-09 10:18:58.973502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:28.435 [2024-12-09 10:18:58.973515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.152 ms 00:27:28.435 [2024-12-09 10:18:58.973529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.435 [2024-12-09 10:18:58.973622] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV 
cache data region needs scrubbing, this may take a while. 00:27:28.435 [2024-12-09 10:18:58.973655] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:27:30.965 [2024-12-09 10:19:01.690389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.965 [2024-12-09 10:19:01.690528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:27:30.965 [2024-12-09 10:19:01.690554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2716.777 ms 00:27:30.965 [2024-12-09 10:19:01.690572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.965 [2024-12-09 10:19:01.734954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.965 [2024-12-09 10:19:01.735055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:30.965 [2024-12-09 10:19:01.735079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.926 ms 00:27:30.965 [2024-12-09 10:19:01.735096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.965 [2024-12-09 10:19:01.735337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.965 [2024-12-09 10:19:01.735364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:30.965 [2024-12-09 10:19:01.735408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:27:30.965 [2024-12-09 10:19:01.735428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.224 [2024-12-09 10:19:01.794442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.224 [2024-12-09 10:19:01.794539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:31.224 [2024-12-09 10:19:01.794564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.958 ms 00:27:31.224 [2024-12-09 10:19:01.794583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.224 [2024-12-09 10:19:01.794761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.224 [2024-12-09 10:19:01.794789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:31.224 [2024-12-09 10:19:01.794805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:31.224 [2024-12-09 10:19:01.794821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.224 [2024-12-09 10:19:01.795455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.224 [2024-12-09 10:19:01.795500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:31.224 [2024-12-09 10:19:01.795517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:27:31.224 [2024-12-09 10:19:01.795532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.224 [2024-12-09 10:19:01.795712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.224 [2024-12-09 10:19:01.795732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:31.224 [2024-12-09 10:19:01.795771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:27:31.224 [2024-12-09 10:19:01.795790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.224 [2024-12-09 10:19:01.817594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.224 [2024-12-09 10:19:01.817678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize reloc 00:27:31.224 [2024-12-09 10:19:01.817702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.747 ms 00:27:31.224 [2024-12-09 10:19:01.817718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.224 [2024-12-09 10:19:01.832560] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:27:31.224 [2024-12-09 10:19:01.854267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.224 [2024-12-09 10:19:01.854360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:31.224 [2024-12-09 10:19:01.854391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.307 ms 00:27:31.224 [2024-12-09 10:19:01.854404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.224 [2024-12-09 10:19:01.930467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.224 [2024-12-09 10:19:01.930587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:27:31.224 [2024-12-09 10:19:01.930625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.889 ms 00:27:31.224 [2024-12-09 10:19:01.930639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.224 [2024-12-09 10:19:01.930987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.224 [2024-12-09 10:19:01.931012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:31.224 [2024-12-09 10:19:01.931034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:27:31.224 [2024-12-09 10:19:01.931047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.224 [2024-12-09 10:19:01.962331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.224 [2024-12-09 10:19:01.962409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:27:31.224 [2024-12-09 10:19:01.962436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.227 ms 00:27:31.224 [2024-12-09 10:19:01.962455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.224 [2024-12-09 10:19:01.993243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.224 [2024-12-09 10:19:01.993342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:27:31.224 [2024-12-09 10:19:01.993371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.639 ms 00:27:31.224 [2024-12-09 10:19:01.993384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.224 [2024-12-09 10:19:01.994422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.224 [2024-12-09 10:19:01.994470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:31.224 [2024-12-09 10:19:01.994492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.898 ms 00:27:31.224 [2024-12-09 10:19:01.994504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.483 [2024-12-09 10:19:02.080705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.483 [2024-12-09 10:19:02.081104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:27:31.483 [2024-12-09 10:19:02.081152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.136 ms 00:27:31.483 [2024-12-09 10:19:02.081167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0
00:27:31.483 [2024-12-09 10:19:02.115547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:31.483 [2024-12-09 10:19:02.115646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map
00:27:31.483 [2024-12-09 10:19:02.115675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.159 ms
00:27:31.483 [2024-12-09 10:19:02.115692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:31.483 [2024-12-09 10:19:02.150182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:31.483 [2024-12-09 10:19:02.150271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log
00:27:31.483 [2024-12-09 10:19:02.150298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.271 ms
00:27:31.483 [2024-12-09 10:19:02.150311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:31.483 [2024-12-09 10:19:02.182771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:31.483 [2024-12-09 10:19:02.182881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:27:31.483 [2024-12-09 10:19:02.182909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.286 ms
00:27:31.483 [2024-12-09 10:19:02.182922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:31.483 [2024-12-09 10:19:02.183092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:31.483 [2024-12-09 10:19:02.183114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:27:31.483 [2024-12-09 10:19:02.183136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms
00:27:31.483 [2024-12-09 10:19:02.183148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:31.483 [2024-12-09 10:19:02.183256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:31.483 [2024-12-09 10:19:02.183274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:27:31.483 [2024-12-09 10:19:02.183290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms
00:27:31.483 [2024-12-09 10:19:02.183302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:31.483 [2024-12-09 10:19:02.184603] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:27:31.483 [2024-12-09 10:19:02.189329] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3238.385 ms, result 0
00:27:31.483 [2024-12-09 10:19:02.190396] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:27:31.483 {
00:27:31.483 "name": "ftl0",
00:27:31.483 "uuid": "1416d288-a52a-4793-951a-3821cfb97ba2"
00:27:31.483 }
00:27:31.483 10:19:02 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0
00:27:31.483 10:19:02 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0
00:27:31.483 10:19:02 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:27:31.483 10:19:02 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i
00:27:31.483 10:19:02 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:27:31.483 10:19:02 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:27:31.483 10:19:02 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
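
The waitforbdev ftl0 call above reduces to two RPCs, both visible in this trace: bdev_wait_for_examine, which returns once pending bdev examinations complete, and the bdev_get_bdevs -b ftl0 -t 2000 that follows, whose -t flag makes the query wait up to 2000 ms for the bdev to register. A minimal standalone sketch of the same pattern; the real waitforbdev helper in autotest_common.sh adds retry logic not reproduced here, and the script name is illustrative only:

  # wait_for_ftl0.sh - illustrative sketch of the waitforbdev pattern above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # rpc.py path as used in this run
  bdev=ftl0
  "$rpc" bdev_wait_for_examine                      # let pending examinations finish
  if "$rpc" bdev_get_bdevs -b "$bdev" -t 2000 >/dev/null; then
      echo "${bdev} registered"
  else
      echo "${bdev} did not appear within 2000 ms" >&2
      exit 1
  fi

00:27:32.050 10:19:02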
ftl.ftl_trim -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:27:32.050 [ 00:27:32.050 { 00:27:32.050 "name": "ftl0", 00:27:32.050 "aliases": [ 00:27:32.050 "1416d288-a52a-4793-951a-3821cfb97ba2" 00:27:32.050 ], 00:27:32.050 "product_name": "FTL disk", 00:27:32.050 "block_size": 4096, 00:27:32.050 "num_blocks": 23592960, 00:27:32.050 "uuid": "1416d288-a52a-4793-951a-3821cfb97ba2", 00:27:32.050 "assigned_rate_limits": { 00:27:32.050 "rw_ios_per_sec": 0, 00:27:32.050 "rw_mbytes_per_sec": 0, 00:27:32.050 "r_mbytes_per_sec": 0, 00:27:32.050 "w_mbytes_per_sec": 0 00:27:32.050 }, 00:27:32.050 "claimed": false, 00:27:32.050 "zoned": false, 00:27:32.050 "supported_io_types": { 00:27:32.050 "read": true, 00:27:32.050 "write": true, 00:27:32.050 "unmap": true, 00:27:32.050 "flush": true, 00:27:32.050 "reset": false, 00:27:32.050 "nvme_admin": false, 00:27:32.050 "nvme_io": false, 00:27:32.050 "nvme_io_md": false, 00:27:32.050 "write_zeroes": true, 00:27:32.050 "zcopy": false, 00:27:32.050 "get_zone_info": false, 00:27:32.050 "zone_management": false, 00:27:32.050 "zone_append": false, 00:27:32.050 "compare": false, 00:27:32.050 "compare_and_write": false, 00:27:32.050 "abort": false, 00:27:32.050 "seek_hole": false, 00:27:32.050 "seek_data": false, 00:27:32.050 "copy": false, 00:27:32.050 "nvme_iov_md": false 00:27:32.050 }, 00:27:32.050 "driver_specific": { 00:27:32.050 "ftl": { 00:27:32.050 "base_bdev": "5506e51f-1972-4596-8b90-38a7a8826a80", 00:27:32.050 "cache": "nvc0n1p0" 00:27:32.050 } 00:27:32.050 } 00:27:32.050 } 00:27:32.050 ] 00:27:32.308 10:19:02 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:27:32.308 10:19:02 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:27:32.308 10:19:02 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:27:32.567 10:19:03 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:27:32.567 10:19:03 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:27:32.826 10:19:03 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:27:32.826 { 00:27:32.826 "name": "ftl0", 00:27:32.826 "aliases": [ 00:27:32.826 "1416d288-a52a-4793-951a-3821cfb97ba2" 00:27:32.826 ], 00:27:32.826 "product_name": "FTL disk", 00:27:32.826 "block_size": 4096, 00:27:32.826 "num_blocks": 23592960, 00:27:32.826 "uuid": "1416d288-a52a-4793-951a-3821cfb97ba2", 00:27:32.826 "assigned_rate_limits": { 00:27:32.826 "rw_ios_per_sec": 0, 00:27:32.826 "rw_mbytes_per_sec": 0, 00:27:32.826 "r_mbytes_per_sec": 0, 00:27:32.826 "w_mbytes_per_sec": 0 00:27:32.826 }, 00:27:32.826 "claimed": false, 00:27:32.826 "zoned": false, 00:27:32.826 "supported_io_types": { 00:27:32.826 "read": true, 00:27:32.826 "write": true, 00:27:32.826 "unmap": true, 00:27:32.826 "flush": true, 00:27:32.826 "reset": false, 00:27:32.826 "nvme_admin": false, 00:27:32.826 "nvme_io": false, 00:27:32.826 "nvme_io_md": false, 00:27:32.826 "write_zeroes": true, 00:27:32.826 "zcopy": false, 00:27:32.826 "get_zone_info": false, 00:27:32.826 "zone_management": false, 00:27:32.826 "zone_append": false, 00:27:32.826 "compare": false, 00:27:32.826 "compare_and_write": false, 00:27:32.826 "abort": false, 00:27:32.826 "seek_hole": false, 00:27:32.826 "seek_data": false, 00:27:32.826 "copy": false, 00:27:32.826 "nvme_iov_md": false 00:27:32.826 }, 00:27:32.826 "driver_specific": { 00:27:32.826 "ftl": { 00:27:32.826 "base_bdev": 
"5506e51f-1972-4596-8b90-38a7a8826a80", 00:27:32.826 "cache": "nvc0n1p0" 00:27:32.826 } 00:27:32.826 } 00:27:32.826 } 00:27:32.826 ]' 00:27:32.826 10:19:03 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:27:32.826 10:19:03 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:27:32.826 10:19:03 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:27:33.084 [2024-12-09 10:19:03.752923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.084 [2024-12-09 10:19:03.753250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:33.084 [2024-12-09 10:19:03.753290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:33.084 [2024-12-09 10:19:03.753307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.084 [2024-12-09 10:19:03.753379] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:27:33.084 [2024-12-09 10:19:03.757103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.084 [2024-12-09 10:19:03.757142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:33.084 [2024-12-09 10:19:03.757169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.691 ms 00:27:33.084 [2024-12-09 10:19:03.757182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.084 [2024-12-09 10:19:03.758019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.084 [2024-12-09 10:19:03.758057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:33.084 [2024-12-09 10:19:03.758089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.720 ms 00:27:33.084 [2024-12-09 10:19:03.758104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.084 [2024-12-09 10:19:03.761709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.085 [2024-12-09 10:19:03.761741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:33.085 [2024-12-09 10:19:03.761760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.549 ms 00:27:33.085 [2024-12-09 10:19:03.761785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.085 [2024-12-09 10:19:03.769175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.085 [2024-12-09 10:19:03.769223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:33.085 [2024-12-09 10:19:03.769243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.305 ms 00:27:33.085 [2024-12-09 10:19:03.769256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.085 [2024-12-09 10:19:03.802234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.085 [2024-12-09 10:19:03.802328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:33.085 [2024-12-09 10:19:03.802359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.816 ms 00:27:33.085 [2024-12-09 10:19:03.802372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.085 [2024-12-09 10:19:03.822534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.085 [2024-12-09 10:19:03.822656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:33.085 [2024-12-09 10:19:03.822691] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.961 ms 00:27:33.085 [2024-12-09 10:19:03.822705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.085 [2024-12-09 10:19:03.823118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.085 [2024-12-09 10:19:03.823143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:33.085 [2024-12-09 10:19:03.823161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:27:33.085 [2024-12-09 10:19:03.823174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.085 [2024-12-09 10:19:03.857395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.085 [2024-12-09 10:19:03.857489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:33.085 [2024-12-09 10:19:03.857517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.153 ms 00:27:33.085 [2024-12-09 10:19:03.857536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.345 [2024-12-09 10:19:03.890770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.345 [2024-12-09 10:19:03.890873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:33.345 [2024-12-09 10:19:03.890905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.037 ms 00:27:33.345 [2024-12-09 10:19:03.890919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.345 [2024-12-09 10:19:03.922813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.345 [2024-12-09 10:19:03.922913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:33.345 [2024-12-09 10:19:03.922941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.704 ms 00:27:33.345 [2024-12-09 10:19:03.922954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.345 [2024-12-09 10:19:03.955094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.345 [2024-12-09 10:19:03.955192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:33.345 [2024-12-09 10:19:03.955232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.854 ms 00:27:33.345 [2024-12-09 10:19:03.955246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.345 [2024-12-09 10:19:03.955413] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:33.345 [2024-12-09 10:19:03.955444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.955465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.955480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.955496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.955510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.955531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.955545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 
[2024-12-09 10:19:03.955561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.955578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.955593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.955607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.955625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.955638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.955654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.955668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.955684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.955698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.955713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.955726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.955774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.955788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.955806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.955820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.955856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.955871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.955887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.955914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.955931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.955945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.955961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.955974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.955989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: 
free 00:27:33.345 [2024-12-09 10:19:03.956002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.956019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.956033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.956049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.956062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.956080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.956093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.956109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.956122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.956137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.956155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.956179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.956192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.956219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.956232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.956247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.956260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.956276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.956288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.956306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.956319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.956337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.956351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.956366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:33.345 [2024-12-09 10:19:03.956379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 
261120 wr_cnt: 0 state: free
[2024-12-09 10:19:03.956394 - .957033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 58-100: 0 / 261120 wr_cnt: 0 state: free
[2024-12-09 10:19:03.957056] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
[2024-12-09 10:19:03.957074] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1416d288-a52a-4793-951a-3821cfb97ba2
[2024-12-09 10:19:03.957087] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
[2024-12-09 10:19:03.957101] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
[2024-12-09 10:19:03.957117] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
[2024-12-09 10:19:03.957133] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
[2024-12-09 10:19:03.957144] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: crit: 0, high: 0, low: 0, start: 0
[2024-12-09 10:19:03.957219] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Dump statistics, duration: 1.811 ms, status: 0
[2024-12-09 10:19:03.975492] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize L2P, duration: 18.150 ms, status: 0
[2024-12-09 10:19:03.976544] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize P2L checkpointing, duration: 0.462 ms, status: 0
[2024-12-09 10:19:04.037149 - .256172] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback steps (Initialize reloc, Initialize bands metadata, Initialize trim map, Initialize valid map, Initialize NV cache, Initialize metadata, Initialize core IO channel, Initialize bands, Initialize memory pools, Initialize superblock, Open cache bdev, Open base bdev): duration 0.000 ms, status 0 each
[2024-12-09 10:19:04.256456] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 503.510 ms, result 0
true
10:19:04 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 78733
10:19:04 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78733 ']'
10:19:04 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78733
10:19:04 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
10:19:04 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
10:19:04 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78733
killing process with pid 78733
10:19:04 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
10:19:04 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
10:19:04 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78733'
10:19:04 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78733
10:19:04 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78733
10:19:09 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536
65536+0 records in
65536+0 records out
268435456 bytes (268 MB, 256 MiB) copied, 1.08922 s, 246 MB/s
10:19:10 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
[2024-12-09 10:19:10.462459] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization...
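In the statistics dump above, WAF (write amplification factor, total media writes divided by user writes) prints as inf because no user writes went through this instance: all 960 recorded writes are FTL metadata. The shell trace then steps through the test suite's killprocess helper line by line; below is a minimal sketch of the same kill-and-wait pattern, illustrative only, since the real autotest_common.sh helper also handles sudo-wrapped processes and other corner cases the trace skips here:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1              # the "[ -z 78733 ]" guard seen in the trace
        kill -0 "$pid" 2>/dev/null || return 0 # kill -0 probes whether the pid is still alive
        if [ "$(uname)" = Linux ]; then
            # the trace captures the command name to special-case sudo wrappers
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                            # reap the reactor and collect its exit status
    }

The dd numbers closing this block are self-consistent: 65536 blocks x 4 KiB = 268435456 bytes (256 MiB), and 268435456 B / 1.08922 s is about 246 MB/s, exactly the rate dd reports for generating the random pattern file.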
[2024-12-09 10:19:10.462813] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78945 ]
[2024-12-09 10:19:10.645814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-09 10:19:10.807852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[2024-12-09 10:19:11.200971] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-12-09 10:19:11.201064] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-12-09 10:19:11.369668] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Check configuration, duration: 0.006 ms, status: 0
[2024-12-09 10:19:11.373574] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Open base bdev, duration: 3.723 ms, status: 0
[2024-12-09 10:19:11.373816] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
[2024-12-09 10:19:11.374823] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
[2024-12-09 10:19:11.374901] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Open cache bdev, duration: 1.111 ms, status: 0
[2024-12-09 10:19:11.377720] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
[2024-12-09 10:19:11.396658] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Load super block, duration: 18.938 ms, status: 0
[2024-12-09 10:19:11.396992] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Validate super block, duration: 0.045 ms, status: 0
[2024-12-09 10:19:11.408479] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize memory pools, duration: 11.331 ms, status: 0
[2024-12-09 10:19:11.408812] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands, duration: 0.109 ms, status: 0
[2024-12-09 10:19:11.408955] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Register IO device, duration: 0.013 ms, status: 0
[2024-12-09 10:19:11.409037] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
[2024-12-09 10:19:11.414950] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize core IO channel, duration: 5.924 ms, status: 0
[2024-12-09 10:19:11.415150] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Decorate bands, duration: 0.011 ms, status: 0
[2024-12-09 10:19:11.415232] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
[2024-12-09 10:19:11.415265] upgrade/ftl_sb_v5.c:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes, base layout blob load 0x48 bytes, layout blob load 0x190 bytes
[2024-12-09 10:19:11.415426] upgrade/ftl_sb_v5.c:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes, base layout blob store 0x48 bytes, layout blob store 0x190 bytes
[2024-12-09 10:19:11.415475] ftl_layout.c:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
[2024-12-09 10:19:11.415489] ftl_layout.c:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
[2024-12-09 10:19:11.415500] ftl_layout.c:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
[2024-12-09 10:19:11.415512] ftl_layout.c:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
[2024-12-09 10:19:11.415523] ftl_layout.c:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
[2024-12-09 10:19:11.415533] ftl_layout.c:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
[2024-12-09 10:19:11.415545] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize layout, duration: 0.317 ms, status: 0
[2024-12-09 10:19:11.415670] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Verify layout, duration: 0.061 ms, status: 0
[2024-12-09 10:19:11.415923] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
    Region            offset (MiB)   blocks (MiB)
    sb                        0.00           0.12
    l2p                       0.12          90.00
    band_md                  90.12           0.50
    band_md_mirror           90.62           0.50
    nvc_md                  123.88           0.12
    nvc_md_mirror           124.00           0.12
    p2l0                     91.12           8.00
    p2l1                     99.12           8.00
    p2l2                    107.12           8.00
    p2l3                    115.12           8.00
    trim_md                 123.12           0.25
    trim_md_mirror          123.38           0.25
    trim_log                123.62           0.12
    trim_log_mirror         123.75           0.12
[2024-12-09 10:19:11.416527] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
    Region            offset (MiB)   blocks (MiB)
    sb_mirror                 0.00           0.12
    vmap                 102400.25           3.38
    data_btm                  0.25      102400.00
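The geometry numbers above can be cross-checked with shell arithmetic. The checks below are illustrative; the inference that blk_offs/blk_sz in the superblock dump that follows are counted in 4 KiB FTL blocks is ours, not stated by the log:

    echo $(( 23592960 * 4 ))    # L2P entries x 4 B per address = 94371840 B = 90 MiB, the l2p region size
    echo $(( 2048 * 4096 ))     # 2048 P2L checkpoint pages x 4 KiB = 8388608 B = 8 MiB, one p2l0..p2l3 region
    echo $(( 0x5a00 * 4096 ))   # 23040 blocks x 4 KiB = the same 90 MiB as the l2p region, which supports the 4 KiB block-size reading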
[2024-12-09 10:19:11.416654] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
    Region type:0x0        ver:5 blk_offs:0x0       blk_sz:0x20
    Region type:0x2        ver:0 blk_offs:0x20      blk_sz:0x5a00
    Region type:0x3        ver:2 blk_offs:0x5a20    blk_sz:0x80
    Region type:0x4        ver:2 blk_offs:0x5aa0    blk_sz:0x80
    Region type:0xa        ver:2 blk_offs:0x5b20    blk_sz:0x800
    Region type:0xb        ver:2 blk_offs:0x6320    blk_sz:0x800
    Region type:0xc        ver:2 blk_offs:0x6b20    blk_sz:0x800
    Region type:0xd        ver:2 blk_offs:0x7320    blk_sz:0x800
    Region type:0xe        ver:0 blk_offs:0x7b20    blk_sz:0x40
    Region type:0xf        ver:0 blk_offs:0x7b60    blk_sz:0x40
    Region type:0x10       ver:1 blk_offs:0x7ba0    blk_sz:0x20
    Region type:0x11       ver:1 blk_offs:0x7bc0    blk_sz:0x20
    Region type:0x6        ver:2 blk_offs:0x7be0    blk_sz:0x20
    Region type:0x7        ver:2 blk_offs:0x7c00    blk_sz:0x20
    Region type:0xfffffffe ver:0 blk_offs:0x7c20    blk_sz:0x13b6e0
[2024-12-09 10:19:11.416864] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
    Region type:0x1        ver:5 blk_offs:0x0       blk_sz:0x20
    Region type:0xfffffffe ver:0 blk_offs:0x20      blk_sz:0x20
    Region type:0x9        ver:0 blk_offs:0x40      blk_sz:0x1900000
    Region type:0x5        ver:0 blk_offs:0x1900040 blk_sz:0x360
    Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
[2024-12-09 10:19:11.416955] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Layout upgrade, duration: 1.122 ms, status: 0
[2024-12-09 10:19:11.462252] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize metadata, duration: 45.150 ms, status: 0
[2024-12-09 10:19:11.462643] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize band addresses, duration: 0.079 ms, status: 0
[2024-12-09 10:19:11.516134] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize NV cache, duration: 53.412 ms, status: 0
[2024-12-09 10:19:11.516400] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize valid map, duration: 0.005 ms, status: 0
[2024-12-09 10:19:11.517074] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize trim map, duration: 0.596 ms, status: 0
[2024-12-09 10:19:11.517326] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands metadata, duration: 0.163 ms, status: 0
[2024-12-09 10:19:11.537797] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize reloc, duration: 20.374 ms, status: 0
[2024-12-09 10:19:11.554807] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4
[2024-12-09 10:19:11.555087] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
[2024-12-09 10:19:11.555114] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore NV cache metadata, duration: 16.935 ms, status: 0
[2024-12-09 10:19:11.585851] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore valid map metadata, duration: 30.579 ms, status: 0
[2024-12-09 10:19:11.602956] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore band info metadata, duration: 16.800 ms, status: 0
[2024-12-09 10:19:11.617648] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore trim metadata, duration: 14.237 ms, status: 0
[2024-12-09 10:19:11.618841] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize P2L checkpointing, duration: 0.893 ms, status: 0
[2024-12-09 10:19:11.699721] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore P2L checkpoints, duration: 80.610 ms, status: 0
[2024-12-09 10:19:11.712141] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
[2024-12-09 10:19:11.733019] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize L2P, duration: 32.943 ms, status: 0
[2024-12-09 10:19:11.733315] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore L2P, duration: 0.009 ms, status: 0
[2024-12-09 10:19:11.733444] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize band initialization, duration: 0.051 ms, status: 0
[2024-12-09 10:19:11.733536] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Start core poller, duration: 0.017 ms, status: 0
[2024-12-09 10:19:11.733633] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
[2024-12-09 10:19:11.733651] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Self test on startup, duration: 0.019 ms, status: 0
[2024-12-09 10:19:11.764275] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL dirty state, duration: 30.562 ms, status: 0
[2024-12-09 10:19:11.764524] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize initialization, duration: 0.043 ms, status: 0
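Summing the step durations above gives roughly 386 ms, close to the 395.918 ms total the 'FTL startup' summary just below reports; the gap is inter-step overhead. The expensive steps are Restore P2L checkpoints (80.610 ms), Initialize NV cache (53.412 ms), Initialize metadata (45.150 ms) and Initialize L2P (32.943 ms). Ranking them mechanically from a saved copy of this log is a one-liner; ftl.log is a stand-in name, and this assumes the condensed one-record-per-line form used above:

    awk -F'Action: |, duration: | ms' '/, duration: / { print $3, $2 }' ftl.log | sort -gr | head -5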
[2024-12-09 10:19:11.766005] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
[2024-12-09 10:19:11.770006] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 395.918 ms, result 0
[2024-12-09 10:19:11.770914] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-12-09 10:19:11.786508] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
[2024-12-09T10:19:13.825Z - 10:19:23.800Z] Copying: 22, 44, 65, 86, 108, 130, 151, 172, 195, 217, 240, 256 of 256 [MB] (20-22 MBps per sample; average 21 MBps)
[2024-12-09 10:19:23.515028] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-12-09 10:19:23.527382] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Deinit core IO channel, duration: 0.004 ms, status: 0
[2024-12-09 10:19:23.527506] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
[2024-12-09 10:19:23.530978] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Unregister IO device, duration: 3.448 ms, status: 0
[2024-12-09 10:19:23.533089] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Stop core poller, duration: 2.006 ms, status: 0
[2024-12-09 10:19:23.540089] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist L2P, duration: 6.707 ms, status: 0
[2024-12-09 10:19:23.547000] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Finish L2P trims, duration: 6.574 ms, status: 0
[2024-12-09 10:19:23.577349] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist NV cache metadata, duration: 30.218 ms, status: 0
[2024-12-09 10:19:23.594847] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist valid map metadata, duration: 17.351 ms, status: 0
[2024-12-09 10:19:23.595107] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist P2L metadata, duration: 0.086 ms, status: 0
[2024-12-09 10:19:23.624225] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist band info metadata, duration: 29.024 ms, status: 0
[2024-12-09 10:19:23.654192] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist trim metadata, duration: 29.781 ms, status: 0
[2024-12-09 10:19:23.684118] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist superblock, duration: 29.782 ms, status: 0
[2024-12-09 10:19:23.715432] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL clean state, duration: 30.877 ms, status: 0
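The copy phase is likewise consistent end to end: 256 MiB of random pattern moves through ftl0 between roughly 10:19:11.79 and 10:19:23.52, about 11.7 s, and 256 / 11.7 is about 22, in line with the 21 MBps average spdk_dd reports; that is well below the 246 MB/s dd achieved against the filesystem, since every block now takes the FTL write path. The bands dump that follows shows all 100 bands in the free state, which is quick to confirm over a saved log (again the stand-in name ftl.log):

    grep -o 'state: [a-z]*' ftl.log | sort | uniq -c    # expect only 'state: free' in this teardown dump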
ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
[2024-12-09 10:19:23.715663 - .717539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 1-100: 0 / 261120 wr_cnt: 0 state: free
[2024-12-09 10:19:23.717572] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
[2024-12-09 10:19:23.717586] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1416d288-a52a-4793-951a-3821cfb97ba2
[2024-12-09 10:19:23.717601] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
[2024-12-09 10:19:23.717615] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
[2024-12-09 10:19:23.717628] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
[2024-12-09 10:19:23.717642] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
[2024-12-09 10:19:23.717655] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: crit: 0, high: 0, low: 0, start: 0
[2024-12-09 10:19:23.717754] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Dump statistics, duration: 2.137 ms, status: 0
[2024-12-09 10:19:23.736529] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize L2P, duration: 18.663 ms, status: 0
[2024-12-09 10:19:23.737240] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize P2L checkpointing, duration: 0.556 ms, status: 0
[2024-12-09 10:19:23.789558 - .990554] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback steps (Initialize reloc, Initialize bands metadata, Initialize trim map, Initialize valid map, Initialize NV cache, Initialize metadata, Initialize core IO channel, Initialize bands, Initialize memory pools): duration 0.000 ms, status 0 each
[2024-12-09 10:19:23.990638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
[2024-12-09 10:19:23.990660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
[2024-12-09 10:19:23.990675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:53.263 [2024-12-09 10:19:23.990698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.263 [2024-12-09 10:19:23.990789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:53.263 [2024-12-09 10:19:23.990810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:53.263 [2024-12-09 10:19:23.990825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:53.263 [2024-12-09 10:19:23.990838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.263 [2024-12-09 10:19:23.990932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:53.263 [2024-12-09 10:19:23.990956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:53.263 [2024-12-09 10:19:23.990980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:53.263 [2024-12-09 10:19:23.990994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.263 [2024-12-09 10:19:23.991230] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 463.816 ms, result 0 00:27:54.638 00:27:54.638 00:27:54.895 10:19:25 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=79098 00:27:54.895 10:19:25 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:27:54.895 10:19:25 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 79098 00:27:54.895 10:19:25 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 79098 ']' 00:27:54.895 10:19:25 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:54.895 10:19:25 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:54.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:54.895 10:19:25 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:54.895 10:19:25 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:54.895 10:19:25 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:27:54.895 [2024-12-09 10:19:25.578514] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
00:27:54.896 [2024-12-09 10:19:25.578716] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79098 ] 00:27:55.153 [2024-12-09 10:19:25.766263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.153 [2024-12-09 10:19:25.908605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:56.090 10:19:26 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:56.090 10:19:26 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:27:56.090 10:19:26 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:27:56.349 [2024-12-09 10:19:27.132079] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:56.349 [2024-12-09 10:19:27.132184] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:56.609 [2024-12-09 10:19:27.319571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.609 [2024-12-09 10:19:27.319652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:56.609 [2024-12-09 10:19:27.319698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:56.609 [2024-12-09 10:19:27.319712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.609 [2024-12-09 10:19:27.323017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.609 [2024-12-09 10:19:27.323061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:56.609 [2024-12-09 10:19:27.323084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.276 ms 00:27:56.609 [2024-12-09 10:19:27.323098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.609 [2024-12-09 10:19:27.323232] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:56.609 [2024-12-09 10:19:27.324153] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:56.609 [2024-12-09 10:19:27.324203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.609 [2024-12-09 10:19:27.324232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:56.609 [2024-12-09 10:19:27.324277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.984 ms 00:27:56.609 [2024-12-09 10:19:27.324317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.609 [2024-12-09 10:19:27.326542] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:56.609 [2024-12-09 10:19:27.342020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.609 [2024-12-09 10:19:27.342258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:56.609 [2024-12-09 10:19:27.342291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.486 ms 00:27:56.609 [2024-12-09 10:19:27.342311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.609 [2024-12-09 10:19:27.342455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.609 [2024-12-09 10:19:27.342485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:56.609 [2024-12-09 10:19:27.342500] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:27:56.609 [2024-12-09 10:19:27.342518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.609 [2024-12-09 10:19:27.354729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.609 [2024-12-09 10:19:27.354800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:56.609 [2024-12-09 10:19:27.354837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.126 ms 00:27:56.609 [2024-12-09 10:19:27.354882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.609 [2024-12-09 10:19:27.355110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.609 [2024-12-09 10:19:27.355141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:56.609 [2024-12-09 10:19:27.355158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:27:56.609 [2024-12-09 10:19:27.355182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.609 [2024-12-09 10:19:27.355230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.609 [2024-12-09 10:19:27.355254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:56.609 [2024-12-09 10:19:27.355284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:56.609 [2024-12-09 10:19:27.355301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.609 [2024-12-09 10:19:27.355359] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:27:56.609 [2024-12-09 10:19:27.361166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.609 [2024-12-09 10:19:27.361210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:56.609 [2024-12-09 10:19:27.361230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.814 ms 00:27:56.609 [2024-12-09 10:19:27.361244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.609 [2024-12-09 10:19:27.361321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.609 [2024-12-09 10:19:27.361343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:56.609 [2024-12-09 10:19:27.361360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:56.609 [2024-12-09 10:19:27.361375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.609 [2024-12-09 10:19:27.361413] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:56.609 [2024-12-09 10:19:27.361447] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:56.609 [2024-12-09 10:19:27.361507] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:56.609 [2024-12-09 10:19:27.361535] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:56.609 [2024-12-09 10:19:27.361638] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:56.609 [2024-12-09 10:19:27.361657] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:56.609 [2024-12-09 10:19:27.361681] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:56.609 [2024-12-09 10:19:27.361698] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:56.609 [2024-12-09 10:19:27.361716] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:56.609 [2024-12-09 10:19:27.361729] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:27:56.609 [2024-12-09 10:19:27.361745] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:56.609 [2024-12-09 10:19:27.361757] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:56.609 [2024-12-09 10:19:27.361774] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:56.609 [2024-12-09 10:19:27.361788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.609 [2024-12-09 10:19:27.361803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:56.609 [2024-12-09 10:19:27.361817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.383 ms 00:27:56.609 [2024-12-09 10:19:27.361865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.609 [2024-12-09 10:19:27.361965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.609 [2024-12-09 10:19:27.361989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:56.609 [2024-12-09 10:19:27.362003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:27:56.610 [2024-12-09 10:19:27.362018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.610 [2024-12-09 10:19:27.362156] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:56.610 [2024-12-09 10:19:27.362182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:56.610 [2024-12-09 10:19:27.362198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:56.610 [2024-12-09 10:19:27.362214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:56.610 [2024-12-09 10:19:27.362228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:56.610 [2024-12-09 10:19:27.362246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:56.610 [2024-12-09 10:19:27.362259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:27:56.610 [2024-12-09 10:19:27.362276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:56.610 [2024-12-09 10:19:27.362289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:27:56.610 [2024-12-09 10:19:27.362304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:56.610 [2024-12-09 10:19:27.362316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:56.610 [2024-12-09 10:19:27.362331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:27:56.610 [2024-12-09 10:19:27.362344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:56.610 [2024-12-09 10:19:27.362359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:56.610 [2024-12-09 10:19:27.362373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:27:56.610 [2024-12-09 10:19:27.362390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:56.610 
[2024-12-09 10:19:27.362402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:56.610 [2024-12-09 10:19:27.362433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:27:56.610 [2024-12-09 10:19:27.362472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:56.610 [2024-12-09 10:19:27.362488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:56.610 [2024-12-09 10:19:27.362500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:27:56.610 [2024-12-09 10:19:27.362514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:56.610 [2024-12-09 10:19:27.362525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:56.610 [2024-12-09 10:19:27.362541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:27:56.610 [2024-12-09 10:19:27.362553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:56.610 [2024-12-09 10:19:27.362567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:56.610 [2024-12-09 10:19:27.362578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:27:56.610 [2024-12-09 10:19:27.362597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:56.610 [2024-12-09 10:19:27.362609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:56.610 [2024-12-09 10:19:27.362625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:27:56.610 [2024-12-09 10:19:27.362636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:56.610 [2024-12-09 10:19:27.362650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:56.610 [2024-12-09 10:19:27.362661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:27:56.610 [2024-12-09 10:19:27.362675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:56.610 [2024-12-09 10:19:27.362687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:56.610 [2024-12-09 10:19:27.362701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:27:56.610 [2024-12-09 10:19:27.362712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:56.610 [2024-12-09 10:19:27.362726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:56.610 [2024-12-09 10:19:27.362738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:27:56.610 [2024-12-09 10:19:27.362754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:56.610 [2024-12-09 10:19:27.362765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:56.610 [2024-12-09 10:19:27.362779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:27:56.610 [2024-12-09 10:19:27.362791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:56.610 [2024-12-09 10:19:27.362805] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:56.610 [2024-12-09 10:19:27.362820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:56.610 [2024-12-09 10:19:27.362835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:56.610 [2024-12-09 10:19:27.362858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:56.610 [2024-12-09 10:19:27.362891] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:27:56.610 [2024-12-09 10:19:27.362908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:56.610 [2024-12-09 10:19:27.362923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:56.610 [2024-12-09 10:19:27.362935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:56.610 [2024-12-09 10:19:27.362949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:56.610 [2024-12-09 10:19:27.362961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:56.610 [2024-12-09 10:19:27.362977] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:56.610 [2024-12-09 10:19:27.362992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:56.610 [2024-12-09 10:19:27.363013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:27:56.610 [2024-12-09 10:19:27.363025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:27:56.610 [2024-12-09 10:19:27.363039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:27:56.610 [2024-12-09 10:19:27.363052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:27:56.610 [2024-12-09 10:19:27.363066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:27:56.610 [2024-12-09 10:19:27.363078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:27:56.610 [2024-12-09 10:19:27.363117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:27:56.610 [2024-12-09 10:19:27.363130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:27:56.610 [2024-12-09 10:19:27.363144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:27:56.610 [2024-12-09 10:19:27.363157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:27:56.610 [2024-12-09 10:19:27.363172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:27:56.610 [2024-12-09 10:19:27.363183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:27:56.610 [2024-12-09 10:19:27.363199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:27:56.610 [2024-12-09 10:19:27.363211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:27:56.610 [2024-12-09 10:19:27.363233] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:56.610 [2024-12-09 
10:19:27.363247] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:56.610 [2024-12-09 10:19:27.363265] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:56.610 [2024-12-09 10:19:27.363278] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:56.610 [2024-12-09 10:19:27.363294] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:56.610 [2024-12-09 10:19:27.363306] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:56.610 [2024-12-09 10:19:27.363322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.610 [2024-12-09 10:19:27.363335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:56.610 [2024-12-09 10:19:27.363350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.252 ms 00:27:56.610 [2024-12-09 10:19:27.363366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.610 [2024-12-09 10:19:27.404470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.610 [2024-12-09 10:19:27.404542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:56.610 [2024-12-09 10:19:27.404579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.011 ms 00:27:56.610 [2024-12-09 10:19:27.404597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.610 [2024-12-09 10:19:27.404806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.610 [2024-12-09 10:19:27.404848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:56.610 [2024-12-09 10:19:27.404889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:27:56.610 [2024-12-09 10:19:27.404904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.869 [2024-12-09 10:19:27.455264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.869 [2024-12-09 10:19:27.455341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:56.869 [2024-12-09 10:19:27.455377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.284 ms 00:27:56.869 [2024-12-09 10:19:27.455391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.869 [2024-12-09 10:19:27.455567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.869 [2024-12-09 10:19:27.455589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:56.869 [2024-12-09 10:19:27.455625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:56.869 [2024-12-09 10:19:27.455639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.869 [2024-12-09 10:19:27.456550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.869 [2024-12-09 10:19:27.456592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:56.869 [2024-12-09 10:19:27.456614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.875 ms 00:27:56.869 [2024-12-09 10:19:27.456658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:27:56.869 [2024-12-09 10:19:27.456871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.869 [2024-12-09 10:19:27.456913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:56.869 [2024-12-09 10:19:27.456936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:27:56.869 [2024-12-09 10:19:27.456951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.869 [2024-12-09 10:19:27.482577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.869 [2024-12-09 10:19:27.482941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:56.869 [2024-12-09 10:19:27.482986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.580 ms 00:27:56.869 [2024-12-09 10:19:27.483003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.869 [2024-12-09 10:19:27.512774] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:27:56.870 [2024-12-09 10:19:27.512823] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:56.870 [2024-12-09 10:19:27.512899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.870 [2024-12-09 10:19:27.512916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:56.870 [2024-12-09 10:19:27.512938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.645 ms 00:27:56.870 [2024-12-09 10:19:27.512967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.870 [2024-12-09 10:19:27.542874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.870 [2024-12-09 10:19:27.542951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:56.870 [2024-12-09 10:19:27.542990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.774 ms 00:27:56.870 [2024-12-09 10:19:27.543005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.870 [2024-12-09 10:19:27.559186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.870 [2024-12-09 10:19:27.559471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:56.870 [2024-12-09 10:19:27.559513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.069 ms 00:27:56.870 [2024-12-09 10:19:27.559530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.870 [2024-12-09 10:19:27.575021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.870 [2024-12-09 10:19:27.575091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:56.870 [2024-12-09 10:19:27.575117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.326 ms 00:27:56.870 [2024-12-09 10:19:27.575131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.870 [2024-12-09 10:19:27.576118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.870 [2024-12-09 10:19:27.576167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:56.870 [2024-12-09 10:19:27.576191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.795 ms 00:27:56.870 [2024-12-09 10:19:27.576205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.870 [2024-12-09 
10:19:27.657428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.870 [2024-12-09 10:19:27.657515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:56.870 [2024-12-09 10:19:27.657544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.162 ms 00:27:56.870 [2024-12-09 10:19:27.657558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.128 [2024-12-09 10:19:27.668448] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:27:57.128 [2024-12-09 10:19:27.694340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.128 [2024-12-09 10:19:27.694458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:57.128 [2024-12-09 10:19:27.694489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.588 ms 00:27:57.128 [2024-12-09 10:19:27.694509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.128 [2024-12-09 10:19:27.694813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.128 [2024-12-09 10:19:27.694843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:57.128 [2024-12-09 10:19:27.694859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:57.128 [2024-12-09 10:19:27.694875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.128 [2024-12-09 10:19:27.695042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.128 [2024-12-09 10:19:27.695071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:57.128 [2024-12-09 10:19:27.695087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:27:57.128 [2024-12-09 10:19:27.695109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.128 [2024-12-09 10:19:27.695149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.128 [2024-12-09 10:19:27.695190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:57.128 [2024-12-09 10:19:27.695206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:57.128 [2024-12-09 10:19:27.695222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.128 [2024-12-09 10:19:27.695278] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:57.128 [2024-12-09 10:19:27.695320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.129 [2024-12-09 10:19:27.695338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:57.129 [2024-12-09 10:19:27.695355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:27:57.129 [2024-12-09 10:19:27.695369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.129 [2024-12-09 10:19:27.727209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.129 [2024-12-09 10:19:27.727319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:57.129 [2024-12-09 10:19:27.727354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.792 ms 00:27:57.129 [2024-12-09 10:19:27.727369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.129 [2024-12-09 10:19:27.727527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.129 [2024-12-09 10:19:27.727549] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:57.129 [2024-12-09 10:19:27.727586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:27:57.129 [2024-12-09 10:19:27.727603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.129 [2024-12-09 10:19:27.729226] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:57.129 [2024-12-09 10:19:27.732911] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 409.276 ms, result 0 00:27:57.129 [2024-12-09 10:19:27.734279] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:57.129 Some configs were skipped because the RPC state that can call them passed over. 00:27:57.129 10:19:27 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:27:57.388 [2024-12-09 10:19:28.072484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.388 [2024-12-09 10:19:28.072892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:27:57.388 [2024-12-09 10:19:28.072928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.801 ms 00:27:57.388 [2024-12-09 10:19:28.072948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.388 [2024-12-09 10:19:28.073011] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.331 ms, result 0 00:27:57.388 true 00:27:57.388 10:19:28 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:27:57.647 [2024-12-09 10:19:28.356291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.647 [2024-12-09 10:19:28.356583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:27:57.647 [2024-12-09 10:19:28.356761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.257 ms 00:27:57.647 [2024-12-09 10:19:28.356940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.647 [2024-12-09 10:19:28.357082] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.043 ms, result 0 00:27:57.647 true 00:27:57.647 10:19:28 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 79098 00:27:57.647 10:19:28 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79098 ']' 00:27:57.647 10:19:28 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79098 00:27:57.647 10:19:28 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:27:57.647 10:19:28 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:57.647 10:19:28 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79098 00:27:57.647 killing process with pid 79098 00:27:57.647 10:19:28 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:57.647 10:19:28 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:57.647 10:19:28 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79098' 00:27:57.647 10:19:28 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 79098 00:27:57.647 10:19:28 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 79098 00:27:59.026 [2024-12-09 10:19:29.544494] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.026 [2024-12-09 10:19:29.544590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:59.026 [2024-12-09 10:19:29.544629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:59.026 [2024-12-09 10:19:29.544647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.026 [2024-12-09 10:19:29.544692] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:27:59.026 [2024-12-09 10:19:29.549156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.026 [2024-12-09 10:19:29.549239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:59.026 [2024-12-09 10:19:29.549267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.429 ms 00:27:59.026 [2024-12-09 10:19:29.549291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.026 [2024-12-09 10:19:29.549679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.026 [2024-12-09 10:19:29.549712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:59.026 [2024-12-09 10:19:29.549749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:27:59.026 [2024-12-09 10:19:29.549763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.026 [2024-12-09 10:19:29.554308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.026 [2024-12-09 10:19:29.554357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:59.026 [2024-12-09 10:19:29.554387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.508 ms 00:27:59.026 [2024-12-09 10:19:29.554402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.026 [2024-12-09 10:19:29.562332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.026 [2024-12-09 10:19:29.562374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:59.026 [2024-12-09 10:19:29.562400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.872 ms 00:27:59.026 [2024-12-09 10:19:29.562415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.026 [2024-12-09 10:19:29.576980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.026 [2024-12-09 10:19:29.577054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:59.026 [2024-12-09 10:19:29.577082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.467 ms 00:27:59.026 [2024-12-09 10:19:29.577096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.026 [2024-12-09 10:19:29.587832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.026 [2024-12-09 10:19:29.588173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:59.026 [2024-12-09 10:19:29.588214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.659 ms 00:27:59.026 [2024-12-09 10:19:29.588232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.026 [2024-12-09 10:19:29.588450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.026 [2024-12-09 10:19:29.588476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:59.026 [2024-12-09 10:19:29.588527] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:27:59.026 [2024-12-09 10:19:29.588541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.026 [2024-12-09 10:19:29.603118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.026 [2024-12-09 10:19:29.603163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:59.026 [2024-12-09 10:19:29.603188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.540 ms 00:27:59.026 [2024-12-09 10:19:29.603203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.026 [2024-12-09 10:19:29.616838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.026 [2024-12-09 10:19:29.616911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:59.026 [2024-12-09 10:19:29.616943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.560 ms 00:27:59.026 [2024-12-09 10:19:29.616959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.026 [2024-12-09 10:19:29.629570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.026 [2024-12-09 10:19:29.629612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:59.026 [2024-12-09 10:19:29.629642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.540 ms 00:27:59.026 [2024-12-09 10:19:29.629655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.026 [2024-12-09 10:19:29.642673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.026 [2024-12-09 10:19:29.642715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:59.026 [2024-12-09 10:19:29.642746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.899 ms 00:27:59.026 [2024-12-09 10:19:29.642771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.026 [2024-12-09 10:19:29.642825] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:59.026 [2024-12-09 10:19:29.642896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:59.026 [2024-12-09 10:19:29.642919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:59.026 [2024-12-09 10:19:29.642934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:59.026 [2024-12-09 10:19:29.642985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:59.026 [2024-12-09 10:19:29.643001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:59.026 [2024-12-09 10:19:29.643038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:59.026 [2024-12-09 10:19:29.643064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:59.026 [2024-12-09 10:19:29.643085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:59.026 [2024-12-09 10:19:29.643116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:59.026 [2024-12-09 10:19:29.643134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:59.026 [2024-12-09 
10:19:29.643148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:59.026 [2024-12-09 10:19:29.643165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:59.026 [2024-12-09 10:19:29.643180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:59.026 [2024-12-09 10:19:29.643200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:59.026 [2024-12-09 10:19:29.643215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:59.026 [2024-12-09 10:19:29.643233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:59.026 [2024-12-09 10:19:29.643247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:59.026 [2024-12-09 10:19:29.643265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:59.026 [2024-12-09 10:19:29.643279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:59.026 [2024-12-09 10:19:29.643297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:59.026 [2024-12-09 10:19:29.643312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:59.026 [2024-12-09 10:19:29.643346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:59.026 [2024-12-09 10:19:29.643363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:59.026 [2024-12-09 10:19:29.643395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:59.026 [2024-12-09 10:19:29.643410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:59.026 [2024-12-09 10:19:29.643427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:59.026 [2024-12-09 10:19:29.643441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:59.026 [2024-12-09 10:19:29.643459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:59.026 [2024-12-09 10:19:29.643474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:59.026 [2024-12-09 10:19:29.643508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:59.026 [2024-12-09 10:19:29.643522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:59.026 [2024-12-09 10:19:29.643557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:59.026 [2024-12-09 10:19:29.643572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:59.026 [2024-12-09 10:19:29.643589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:59.026 [2024-12-09 10:19:29.643604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:27:59.026 [2024-12-09 10:19:29.643622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:59.026 [2024-12-09 10:19:29.643636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:59.026 [2024-12-09 10:19:29.643656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:59.026 [2024-12-09 10:19:29.643671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:59.026 [2024-12-09 10:19:29.643711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.643727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.643743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.643757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.643774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.643788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.643804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.643819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.643836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.643849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.643866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.643879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.643913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.643927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.643946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.643976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.643997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.644011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.644029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.644059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.644076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.644089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.644107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.644121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.644138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.644168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.644186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.644200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.644220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.644234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.644257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.644272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.644291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.644306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.644323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.644337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.644354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.644401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.644418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.644432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.644449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.644465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.644482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.644496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.644514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.644528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.644549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.644563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.644599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.644613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.644631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.644645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.644664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.644694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.644728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.644743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.644760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.644775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.644792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.644806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.644823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:59.027 [2024-12-09 10:19:29.644880] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:59.027 [2024-12-09 10:19:29.644907] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1416d288-a52a-4793-951a-3821cfb97ba2 00:27:59.027 [2024-12-09 10:19:29.644926] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:59.027 [2024-12-09 10:19:29.644955] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:59.027 [2024-12-09 10:19:29.644972] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:59.027 [2024-12-09 10:19:29.644990] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:59.027 [2024-12-09 10:19:29.645004] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:59.027 [2024-12-09 10:19:29.645020] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:59.027 [2024-12-09 10:19:29.645050] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:59.027 [2024-12-09 10:19:29.645065] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:59.027 [2024-12-09 10:19:29.645077] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:59.027 [2024-12-09 10:19:29.645093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
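The stats dump just above ends with WAF: inf alongside total writes: 960 and user writes: 0. Write amplification factor is conventionally the ratio of media writes to host writes, so with zero user writes the ratio is undefined and the dump renders it as inf. A minimal sketch of that calculation, using hypothetical struct and function names (this is illustrative, not SPDK's ftl_debug.c code):

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical counters mirroring the fields printed in the stats
     * dump above: total (media) writes vs. user-submitted writes. */
    struct ftl_stats {
            uint64_t total_writes; /* 960 in the dump above */
            uint64_t user_writes;  /* 0 in the dump above   */
    };

    /* WAF = media writes / host writes; with no host writes the ratio
     * is undefined, which the log prints as "inf". */
    static double waf(const struct ftl_stats *s)
    {
            if (s->user_writes == 0)
                    return INFINITY;
            return (double)s->total_writes / (double)s->user_writes;
    }

    int main(void)
    {
            struct ftl_stats s = { .total_writes = 960, .user_writes = 0 };
            printf("WAF: %g\n", waf(&s)); /* prints "WAF: inf" */
            return 0;
    }

This run only exercised trim and metadata paths, so all 960 writes were internal (metadata) writes, which is why user writes stays 0 and every band remains "0 / 261120 ... state: free".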
00:27:59.027 [2024-12-09 10:19:29.645108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:59.027 [2024-12-09 10:19:29.645125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.274 ms 00:27:59.027 [2024-12-09 10:19:29.645144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.027 [2024-12-09 10:19:29.664907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.027 [2024-12-09 10:19:29.664968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:59.027 [2024-12-09 10:19:29.665004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.709 ms 00:27:59.027 [2024-12-09 10:19:29.665018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.027 [2024-12-09 10:19:29.665680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.027 [2024-12-09 10:19:29.665751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:59.027 [2024-12-09 10:19:29.665796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms 00:27:59.027 [2024-12-09 10:19:29.665827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.027 [2024-12-09 10:19:29.734788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.027 [2024-12-09 10:19:29.735134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:59.027 [2024-12-09 10:19:29.735192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.027 [2024-12-09 10:19:29.735210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.027 [2024-12-09 10:19:29.735439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.027 [2024-12-09 10:19:29.735463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:59.027 [2024-12-09 10:19:29.735489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.027 [2024-12-09 10:19:29.735503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.027 [2024-12-09 10:19:29.735609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.027 [2024-12-09 10:19:29.735633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:59.027 [2024-12-09 10:19:29.735656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.027 [2024-12-09 10:19:29.735688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.027 [2024-12-09 10:19:29.735741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.027 [2024-12-09 10:19:29.735760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:59.027 [2024-12-09 10:19:29.735778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.027 [2024-12-09 10:19:29.735811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.287 [2024-12-09 10:19:29.852738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.287 [2024-12-09 10:19:29.852824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:59.287 [2024-12-09 10:19:29.852898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.287 [2024-12-09 10:19:29.852913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.287 [2024-12-09 
10:19:29.931727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.287 [2024-12-09 10:19:29.931889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:59.287 [2024-12-09 10:19:29.931951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.287 [2024-12-09 10:19:29.931971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.287 [2024-12-09 10:19:29.932170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.287 [2024-12-09 10:19:29.932196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:59.287 [2024-12-09 10:19:29.932220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.287 [2024-12-09 10:19:29.932236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.287 [2024-12-09 10:19:29.932289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.287 [2024-12-09 10:19:29.932309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:59.287 [2024-12-09 10:19:29.932327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.287 [2024-12-09 10:19:29.932342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.287 [2024-12-09 10:19:29.932503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.287 [2024-12-09 10:19:29.932528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:59.287 [2024-12-09 10:19:29.932548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.287 [2024-12-09 10:19:29.932563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.287 [2024-12-09 10:19:29.932642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.287 [2024-12-09 10:19:29.932676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:59.287 [2024-12-09 10:19:29.932697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.287 [2024-12-09 10:19:29.932712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.287 [2024-12-09 10:19:29.932782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.287 [2024-12-09 10:19:29.932804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:59.287 [2024-12-09 10:19:29.932827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.287 [2024-12-09 10:19:29.932842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.287 [2024-12-09 10:19:29.932940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.287 [2024-12-09 10:19:29.932964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:59.287 [2024-12-09 10:19:29.932985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.287 [2024-12-09 10:19:29.933000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.287 [2024-12-09 10:19:29.933221] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 388.694 ms, result 0 00:28:00.665 10:19:31 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:28:00.665 10:19:31 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:00.665 [2024-12-09 10:19:31.219835] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:28:00.665 [2024-12-09 10:19:31.220076] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79167 ] 00:28:00.665 [2024-12-09 10:19:31.393177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:00.923 [2024-12-09 10:19:31.526862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:01.182 [2024-12-09 10:19:31.915672] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:01.182 [2024-12-09 10:19:31.915785] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:01.442 [2024-12-09 10:19:32.088349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.442 [2024-12-09 10:19:32.088649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:01.442 [2024-12-09 10:19:32.088689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:01.442 [2024-12-09 10:19:32.088703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.442 [2024-12-09 10:19:32.092822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.442 [2024-12-09 10:19:32.092883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:01.442 [2024-12-09 10:19:32.092903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.081 ms 00:28:01.442 [2024-12-09 10:19:32.092915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.442 [2024-12-09 10:19:32.093112] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:01.442 [2024-12-09 10:19:32.094124] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:01.442 [2024-12-09 10:19:32.094170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.442 [2024-12-09 10:19:32.094185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:01.442 [2024-12-09 10:19:32.094199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.069 ms 00:28:01.442 [2024-12-09 10:19:32.094211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.442 [2024-12-09 10:19:32.096747] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:01.442 [2024-12-09 10:19:32.115364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.442 [2024-12-09 10:19:32.115409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:01.442 [2024-12-09 10:19:32.115428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.619 ms 00:28:01.442 [2024-12-09 10:19:32.115441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.442 [2024-12-09 10:19:32.115569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.442 [2024-12-09 10:19:32.115594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:01.442 [2024-12-09 10:19:32.115608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.031 ms 00:28:01.442 [2024-12-09 10:19:32.115620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.442 [2024-12-09 10:19:32.128090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.442 [2024-12-09 10:19:32.128162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:01.442 [2024-12-09 10:19:32.128192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.403 ms 00:28:01.442 [2024-12-09 10:19:32.128212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.442 [2024-12-09 10:19:32.128506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.442 [2024-12-09 10:19:32.128541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:01.442 [2024-12-09 10:19:32.128576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:28:01.442 [2024-12-09 10:19:32.128596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.442 [2024-12-09 10:19:32.128668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.442 [2024-12-09 10:19:32.128696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:01.442 [2024-12-09 10:19:32.128748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:28:01.442 [2024-12-09 10:19:32.128767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.442 [2024-12-09 10:19:32.128829] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:28:01.442 [2024-12-09 10:19:32.136038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.442 [2024-12-09 10:19:32.136080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:01.442 [2024-12-09 10:19:32.136096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.225 ms 00:28:01.442 [2024-12-09 10:19:32.136107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.442 [2024-12-09 10:19:32.136179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.442 [2024-12-09 10:19:32.136200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:01.442 [2024-12-09 10:19:32.136213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:01.442 [2024-12-09 10:19:32.136223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.442 [2024-12-09 10:19:32.136271] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:01.442 [2024-12-09 10:19:32.136312] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:01.442 [2024-12-09 10:19:32.136357] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:01.442 [2024-12-09 10:19:32.136378] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:01.442 [2024-12-09 10:19:32.136478] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:01.442 [2024-12-09 10:19:32.136493] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:01.442 [2024-12-09 10:19:32.136507] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:01.442 [2024-12-09 10:19:32.136527] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:01.442 [2024-12-09 10:19:32.136540] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:01.442 [2024-12-09 10:19:32.136552] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:28:01.442 [2024-12-09 10:19:32.136562] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:01.442 [2024-12-09 10:19:32.136589] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:01.442 [2024-12-09 10:19:32.136600] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:01.442 [2024-12-09 10:19:32.136612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.442 [2024-12-09 10:19:32.136623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:01.442 [2024-12-09 10:19:32.136635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.346 ms 00:28:01.442 [2024-12-09 10:19:32.136645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.442 [2024-12-09 10:19:32.136750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.442 [2024-12-09 10:19:32.136773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:01.442 [2024-12-09 10:19:32.136785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:28:01.442 [2024-12-09 10:19:32.136796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.442 [2024-12-09 10:19:32.136961] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:01.442 [2024-12-09 10:19:32.136985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:01.442 [2024-12-09 10:19:32.136999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:01.442 [2024-12-09 10:19:32.137012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:01.442 [2024-12-09 10:19:32.137024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:01.442 [2024-12-09 10:19:32.137050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:01.442 [2024-12-09 10:19:32.137060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:28:01.442 [2024-12-09 10:19:32.137071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:01.442 [2024-12-09 10:19:32.137081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:28:01.442 [2024-12-09 10:19:32.137091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:01.442 [2024-12-09 10:19:32.137102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:01.442 [2024-12-09 10:19:32.137127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:28:01.442 [2024-12-09 10:19:32.137138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:01.442 [2024-12-09 10:19:32.137149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:01.442 [2024-12-09 10:19:32.137160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:28:01.442 [2024-12-09 10:19:32.137170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:01.442 [2024-12-09 10:19:32.137180] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:01.442 [2024-12-09 10:19:32.137192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:28:01.442 [2024-12-09 10:19:32.137203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:01.442 [2024-12-09 10:19:32.137229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:01.442 [2024-12-09 10:19:32.137240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:28:01.442 [2024-12-09 10:19:32.137253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:01.442 [2024-12-09 10:19:32.137272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:01.442 [2024-12-09 10:19:32.137291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:28:01.442 [2024-12-09 10:19:32.137302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:01.442 [2024-12-09 10:19:32.137313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:01.442 [2024-12-09 10:19:32.137324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:28:01.442 [2024-12-09 10:19:32.137335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:01.442 [2024-12-09 10:19:32.137361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:01.443 [2024-12-09 10:19:32.137372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:28:01.443 [2024-12-09 10:19:32.137382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:01.443 [2024-12-09 10:19:32.137393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:01.443 [2024-12-09 10:19:32.137418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:28:01.443 [2024-12-09 10:19:32.137444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:01.443 [2024-12-09 10:19:32.137456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:01.443 [2024-12-09 10:19:32.137467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:28:01.443 [2024-12-09 10:19:32.137480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:01.443 [2024-12-09 10:19:32.137490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:01.443 [2024-12-09 10:19:32.137501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:28:01.443 [2024-12-09 10:19:32.137511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:01.443 [2024-12-09 10:19:32.137521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:01.443 [2024-12-09 10:19:32.137531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:28:01.443 [2024-12-09 10:19:32.137542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:01.443 [2024-12-09 10:19:32.137552] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:01.443 [2024-12-09 10:19:32.137564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:01.443 [2024-12-09 10:19:32.137582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:01.443 [2024-12-09 10:19:32.137594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:01.443 [2024-12-09 10:19:32.137606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:01.443 
[2024-12-09 10:19:32.137617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:01.443 [2024-12-09 10:19:32.137629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:01.443 [2024-12-09 10:19:32.137641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:01.443 [2024-12-09 10:19:32.137651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:01.443 [2024-12-09 10:19:32.137677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:01.443 [2024-12-09 10:19:32.137689] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:01.443 [2024-12-09 10:19:32.137718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:01.443 [2024-12-09 10:19:32.137731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:28:01.443 [2024-12-09 10:19:32.137742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:28:01.443 [2024-12-09 10:19:32.137759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:28:01.443 [2024-12-09 10:19:32.137769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:28:01.443 [2024-12-09 10:19:32.137795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:28:01.443 [2024-12-09 10:19:32.137821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:28:01.443 [2024-12-09 10:19:32.137832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:28:01.443 [2024-12-09 10:19:32.137843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:28:01.443 [2024-12-09 10:19:32.137853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:28:01.443 [2024-12-09 10:19:32.137864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:28:01.443 [2024-12-09 10:19:32.137874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:28:01.443 [2024-12-09 10:19:32.137884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:28:01.443 [2024-12-09 10:19:32.137895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:28:01.443 [2024-12-09 10:19:32.137906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:28:01.443 [2024-12-09 10:19:32.137916] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:01.443 [2024-12-09 10:19:32.137928] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:01.443 [2024-12-09 10:19:32.137955] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:01.443 [2024-12-09 10:19:32.137982] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:01.443 [2024-12-09 10:19:32.137992] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:01.443 [2024-12-09 10:19:32.138002] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:01.443 [2024-12-09 10:19:32.138030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.443 [2024-12-09 10:19:32.138050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:01.443 [2024-12-09 10:19:32.138062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.182 ms 00:28:01.443 [2024-12-09 10:19:32.138073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.443 [2024-12-09 10:19:32.187328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.443 [2024-12-09 10:19:32.187411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:01.443 [2024-12-09 10:19:32.187431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.119 ms 00:28:01.443 [2024-12-09 10:19:32.187445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.443 [2024-12-09 10:19:32.187736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.443 [2024-12-09 10:19:32.187758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:01.443 [2024-12-09 10:19:32.187771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:28:01.443 [2024-12-09 10:19:32.187783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.702 [2024-12-09 10:19:32.250841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.702 [2024-12-09 10:19:32.250919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:01.702 [2024-12-09 10:19:32.250941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.022 ms 00:28:01.702 [2024-12-09 10:19:32.250954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.702 [2024-12-09 10:19:32.251137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.702 [2024-12-09 10:19:32.251160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:01.702 [2024-12-09 10:19:32.251175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:01.702 [2024-12-09 10:19:32.251189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.702 [2024-12-09 10:19:32.252101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.702 [2024-12-09 10:19:32.252133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:01.702 [2024-12-09 10:19:32.252158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.877 ms 00:28:01.702 [2024-12-09 10:19:32.252170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.702 [2024-12-09 
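The Action / name / duration / status quartets that repeat throughout this log are emitted once per management step by trace_step in mngt/ftl_mngt.c, reporting each step's wall-clock duration in milliseconds. A minimal sketch of that reporting pattern, with hypothetical helper names and timing (illustrative only, not SPDK's implementation):

    #include <stdio.h>
    #include <time.h>

    /* Illustrative step tracer printing the same four notices seen in
     * the log: Action, name, duration (ms), status. Names hypothetical. */
    struct step_trace {
            const char *name;
            struct timespec start;
    };

    static void step_begin(struct step_trace *t, const char *name)
    {
            t->name = name;
            clock_gettime(CLOCK_MONOTONIC, &t->start);
    }

    static void step_end(const struct step_trace *t, int status)
    {
            struct timespec end;
            clock_gettime(CLOCK_MONOTONIC, &end);
            double ms = (end.tv_sec - t->start.tv_sec) * 1e3 +
                        (end.tv_nsec - t->start.tv_nsec) / 1e6;
            printf("Action\n");
            printf("     name:     %s\n", t->name);
            printf("     duration: %.3f ms\n", ms);
            printf("     status:   %d\n", status);
    }

    int main(void)
    {
            struct step_trace t;
            step_begin(&t, "Initialize reloc");
            /* ... the step's actual work would run here ... */
            step_end(&t, 0);
            return 0;
    }

A status of 0 marks a successful step; the finish_msg line at the end of each sequence (e.g. "Management process finished, name 'FTL startup', duration = 439.135 ms, result 0") sums the whole process the same way.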
10:19:32.252418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.702 [2024-12-09 10:19:32.252440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:01.702 [2024-12-09 10:19:32.252453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:28:01.702 [2024-12-09 10:19:32.252466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.702 [2024-12-09 10:19:32.277080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.702 [2024-12-09 10:19:32.277134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:01.702 [2024-12-09 10:19:32.277154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.578 ms 00:28:01.702 [2024-12-09 10:19:32.277167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.702 [2024-12-09 10:19:32.296621] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:28:01.702 [2024-12-09 10:19:32.296666] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:01.702 [2024-12-09 10:19:32.296687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.702 [2024-12-09 10:19:32.296700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:01.702 [2024-12-09 10:19:32.296714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.322 ms 00:28:01.702 [2024-12-09 10:19:32.296756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.702 [2024-12-09 10:19:32.332245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.702 [2024-12-09 10:19:32.332303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:01.702 [2024-12-09 10:19:32.332321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.308 ms 00:28:01.702 [2024-12-09 10:19:32.332334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.702 [2024-12-09 10:19:32.349241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.702 [2024-12-09 10:19:32.349284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:01.702 [2024-12-09 10:19:32.349302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.807 ms 00:28:01.702 [2024-12-09 10:19:32.349328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.702 [2024-12-09 10:19:32.366006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.702 [2024-12-09 10:19:32.366079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:01.702 [2024-12-09 10:19:32.366116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.549 ms 00:28:01.702 [2024-12-09 10:19:32.366128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.702 [2024-12-09 10:19:32.367075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.702 [2024-12-09 10:19:32.367315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:01.702 [2024-12-09 10:19:32.367360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.791 ms 00:28:01.702 [2024-12-09 10:19:32.367390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.702 [2024-12-09 10:19:32.458069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:28:01.702 [2024-12-09 10:19:32.458394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:01.702 [2024-12-09 10:19:32.458456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 90.620 ms 00:28:01.703 [2024-12-09 10:19:32.458471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.703 [2024-12-09 10:19:32.469965] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:28:01.703 [2024-12-09 10:19:32.496609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.703 [2024-12-09 10:19:32.496943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:01.703 [2024-12-09 10:19:32.496979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.966 ms 00:28:01.703 [2024-12-09 10:19:32.497013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.703 [2024-12-09 10:19:32.497188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.703 [2024-12-09 10:19:32.497210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:01.703 [2024-12-09 10:19:32.497224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:01.703 [2024-12-09 10:19:32.497251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.703 [2024-12-09 10:19:32.497340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.703 [2024-12-09 10:19:32.497358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:01.703 [2024-12-09 10:19:32.497369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:28:01.703 [2024-12-09 10:19:32.497387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.703 [2024-12-09 10:19:32.497438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.703 [2024-12-09 10:19:32.497456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:01.703 [2024-12-09 10:19:32.497468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:28:01.703 [2024-12-09 10:19:32.497479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.703 [2024-12-09 10:19:32.497530] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:01.703 [2024-12-09 10:19:32.497548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.703 [2024-12-09 10:19:32.497559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:01.703 [2024-12-09 10:19:32.497571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:28:01.703 [2024-12-09 10:19:32.497581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.961 [2024-12-09 10:19:32.526000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.961 [2024-12-09 10:19:32.526054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:01.961 [2024-12-09 10:19:32.526073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.389 ms 00:28:01.961 [2024-12-09 10:19:32.526091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.961 [2024-12-09 10:19:32.526253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.961 [2024-12-09 10:19:32.526276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:28:01.961 [2024-12-09 10:19:32.526290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:28:01.961 [2024-12-09 10:19:32.526301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.961 [2024-12-09 10:19:32.527877] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:01.961 [2024-12-09 10:19:32.531981] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 439.135 ms, result 0 00:28:01.961 [2024-12-09 10:19:32.533002] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:01.961 [2024-12-09 10:19:32.547754] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:02.897  [2024-12-09T10:19:34.628Z] Copying: 26/256 [MB] (26 MBps) [2024-12-09T10:19:35.576Z] Copying: 50/256 [MB] (24 MBps) [2024-12-09T10:19:36.950Z] Copying: 74/256 [MB] (23 MBps) [2024-12-09T10:19:37.885Z] Copying: 98/256 [MB] (23 MBps) [2024-12-09T10:19:38.821Z] Copying: 120/256 [MB] (22 MBps) [2024-12-09T10:19:39.758Z] Copying: 141/256 [MB] (21 MBps) [2024-12-09T10:19:40.695Z] Copying: 162/256 [MB] (20 MBps) [2024-12-09T10:19:41.631Z] Copying: 183/256 [MB] (20 MBps) [2024-12-09T10:19:42.568Z] Copying: 204/256 [MB] (20 MBps) [2024-12-09T10:19:43.946Z] Copying: 224/256 [MB] (20 MBps) [2024-12-09T10:19:44.205Z] Copying: 245/256 [MB] (20 MBps) [2024-12-09T10:19:44.205Z] Copying: 256/256 [MB] (average 22 MBps)[2024-12-09 10:19:44.051226] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:13.408 [2024-12-09 10:19:44.064688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.408 [2024-12-09 10:19:44.064763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:13.408 [2024-12-09 10:19:44.064803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:13.408 [2024-12-09 10:19:44.064816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.408 [2024-12-09 10:19:44.064909] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:28:13.408 [2024-12-09 10:19:44.069077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.408 [2024-12-09 10:19:44.069120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:13.408 [2024-12-09 10:19:44.069139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.127 ms 00:28:13.408 [2024-12-09 10:19:44.069152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.408 [2024-12-09 10:19:44.069513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.408 [2024-12-09 10:19:44.069533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:13.408 [2024-12-09 10:19:44.069545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:28:13.408 [2024-12-09 10:19:44.069556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.408 [2024-12-09 10:19:44.073483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.408 [2024-12-09 10:19:44.073536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:13.408 [2024-12-09 10:19:44.073582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 3.873 ms 00:28:13.408 [2024-12-09 10:19:44.073593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.408 [2024-12-09 10:19:44.081297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.408 [2024-12-09 10:19:44.081349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:13.408 [2024-12-09 10:19:44.081366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.660 ms 00:28:13.408 [2024-12-09 10:19:44.081377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.408 [2024-12-09 10:19:44.111979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.408 [2024-12-09 10:19:44.112056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:13.408 [2024-12-09 10:19:44.112091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.476 ms 00:28:13.408 [2024-12-09 10:19:44.112103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.408 [2024-12-09 10:19:44.131819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.408 [2024-12-09 10:19:44.132080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:13.408 [2024-12-09 10:19:44.132148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.619 ms 00:28:13.408 [2024-12-09 10:19:44.132174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.408 [2024-12-09 10:19:44.132408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.408 [2024-12-09 10:19:44.132456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:13.408 [2024-12-09 10:19:44.132492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:28:13.408 [2024-12-09 10:19:44.132518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.408 [2024-12-09 10:19:44.164184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.408 [2024-12-09 10:19:44.164244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:13.408 [2024-12-09 10:19:44.164278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.640 ms 00:28:13.408 [2024-12-09 10:19:44.164289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.408 [2024-12-09 10:19:44.194956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.408 [2024-12-09 10:19:44.195182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:13.408 [2024-12-09 10:19:44.195220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.594 ms 00:28:13.408 [2024-12-09 10:19:44.195246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.668 [2024-12-09 10:19:44.226335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.668 [2024-12-09 10:19:44.226391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:13.668 [2024-12-09 10:19:44.226412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.993 ms 00:28:13.668 [2024-12-09 10:19:44.226431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.668 [2024-12-09 10:19:44.258097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.668 [2024-12-09 10:19:44.258179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:13.668 [2024-12-09 
10:19:44.258201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.527 ms 00:28:13.668 [2024-12-09 10:19:44.258213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.668 [2024-12-09 10:19:44.258319] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:13.668 [2024-12-09 10:19:44.258349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:13.668 [2024-12-09 10:19:44.258365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:13.668 [2024-12-09 10:19:44.258378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:13.668 [2024-12-09 10:19:44.258390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:13.668 [2024-12-09 10:19:44.258403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:13.668 [2024-12-09 10:19:44.258415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:13.668 [2024-12-09 10:19:44.258428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:13.668 [2024-12-09 10:19:44.258440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:13.668 [2024-12-09 10:19:44.258453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:13.668 [2024-12-09 10:19:44.258465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:13.668 [2024-12-09 10:19:44.258478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:13.668 [2024-12-09 10:19:44.258489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:13.668 [2024-12-09 10:19:44.258501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:13.668 [2024-12-09 10:19:44.258513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:13.668 [2024-12-09 10:19:44.258525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:13.668 [2024-12-09 10:19:44.258538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:13.668 [2024-12-09 10:19:44.258550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:13.668 [2024-12-09 10:19:44.258562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:13.668 [2024-12-09 10:19:44.258573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:13.668 [2024-12-09 10:19:44.258585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:13.668 [2024-12-09 10:19:44.258598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:13.668 [2024-12-09 10:19:44.258610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:13.668 [2024-12-09 10:19:44.258622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:13.668 [2024-12-09 10:19:44.258635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:13.668 [2024-12-09 10:19:44.258647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:13.668 [2024-12-09 10:19:44.258659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:13.668 [2024-12-09 10:19:44.258674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:13.668 [2024-12-09 10:19:44.258688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:13.668 [2024-12-09 10:19:44.258701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:13.668 [2024-12-09 10:19:44.258713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:13.668 [2024-12-09 10:19:44.258726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:13.668 [2024-12-09 10:19:44.258738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:13.668 [2024-12-09 10:19:44.258752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:13.668 [2024-12-09 10:19:44.258765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:13.668 [2024-12-09 10:19:44.258777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:13.668 [2024-12-09 10:19:44.258789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:13.668 [2024-12-09 10:19:44.258801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:13.668 [2024-12-09 10:19:44.258814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:13.668 [2024-12-09 10:19:44.258845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:13.668 [2024-12-09 10:19:44.258861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:13.668 [2024-12-09 10:19:44.258874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:13.668 [2024-12-09 10:19:44.258886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:13.668 [2024-12-09 10:19:44.258899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:13.668 [2024-12-09 10:19:44.258911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:13.668 [2024-12-09 10:19:44.258923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:13.668 [2024-12-09 10:19:44.258935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:13.668 [2024-12-09 10:19:44.258946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:13.668 [2024-12-09 10:19:44.258958] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:13.669 [2024-12-09 10:19:44.258970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:13.669 [2024-12-09 10:19:44.258982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:13.669 [2024-12-09 10:19:44.258994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:13.669 [2024-12-09 10:19:44.259006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:13.669 [2024-12-09 10:19:44.259018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:13.669 [2024-12-09 10:19:44.259031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:13.669 [2024-12-09 10:19:44.259042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:13.669 [2024-12-09 10:19:44.259067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:13.669 [2024-12-09 10:19:44.259080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:13.669 [2024-12-09 10:19:44.259092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:13.669 [2024-12-09 10:19:44.259104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:13.669 [2024-12-09 10:19:44.259117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:13.669 [2024-12-09 10:19:44.259129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:13.669 [2024-12-09 10:19:44.259141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:13.669 [2024-12-09 10:19:44.259152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:13.669 [2024-12-09 10:19:44.259164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:13.669 [2024-12-09 10:19:44.259177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:13.669 [2024-12-09 10:19:44.259188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:13.669 [2024-12-09 10:19:44.259200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:13.669 [2024-12-09 10:19:44.259211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:13.669 [2024-12-09 10:19:44.259222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:13.669 [2024-12-09 10:19:44.259234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:13.669 [2024-12-09 10:19:44.259246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:13.669 [2024-12-09 10:19:44.259258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:13.669 [2024-12-09 10:19:44.259269] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:13.669 [2024-12-09 10:19:44.259281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:13.669 [2024-12-09 10:19:44.259293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:13.669 [2024-12-09 10:19:44.259304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:13.669 [2024-12-09 10:19:44.259316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:13.669 [2024-12-09 10:19:44.259327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:13.669 [2024-12-09 10:19:44.259338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:13.669 [2024-12-09 10:19:44.259350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:13.669 [2024-12-09 10:19:44.259364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:13.669 [2024-12-09 10:19:44.259384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:13.669 [2024-12-09 10:19:44.259403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:13.669 [2024-12-09 10:19:44.259416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:13.669 [2024-12-09 10:19:44.259428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:13.669 [2024-12-09 10:19:44.259448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:13.669 [2024-12-09 10:19:44.259468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:13.669 [2024-12-09 10:19:44.259483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:13.669 [2024-12-09 10:19:44.259500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:13.669 [2024-12-09 10:19:44.259519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:13.669 [2024-12-09 10:19:44.259543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:13.669 [2024-12-09 10:19:44.259566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:13.669 [2024-12-09 10:19:44.259589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:13.669 [2024-12-09 10:19:44.259645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:13.669 [2024-12-09 10:19:44.259665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:13.669 [2024-12-09 10:19:44.259682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:13.669 [2024-12-09 10:19:44.259695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:13.669 [2024-12-09 
10:19:44.259707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:28:13.669 [2024-12-09 10:19:44.259720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:28:13.669 [2024-12-09 10:19:44.259741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:28:13.669 [2024-12-09 10:19:44.259773] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:28:13.669 [2024-12-09 10:19:44.259797] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1416d288-a52a-4793-951a-3821cfb97ba2
00:28:13.669 [2024-12-09 10:19:44.259821] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:28:13.669 [2024-12-09 10:19:44.259867] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:28:13.669 [2024-12-09 10:19:44.259889] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:28:13.669 [2024-12-09 10:19:44.259911] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:28:13.669 [2024-12-09 10:19:44.259923] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:28:13.669 [2024-12-09 10:19:44.259936] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:28:13.669 [2024-12-09 10:19:44.259963] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:28:13.669 [2024-12-09 10:19:44.259974] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:28:13.669 [2024-12-09 10:19:44.259991] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:28:13.669 [2024-12-09 10:19:44.260013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:13.669 [2024-12-09 10:19:44.260036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:28:13.669 [2024-12-09 10:19:44.260061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.695 ms
00:28:13.669 [2024-12-09 10:19:44.260083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:13.669 [2024-12-09 10:19:44.277616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:13.669 [2024-12-09 10:19:44.277683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:28:13.669 [2024-12-09 10:19:44.277720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.475 ms
00:28:13.669 [2024-12-09 10:19:44.277732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:13.669 [2024-12-09 10:19:44.278419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:13.669 [2024-12-09 10:19:44.278478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:28:13.669 [2024-12-09 10:19:44.278509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.562 ms
00:28:13.669 [2024-12-09 10:19:44.278533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:13.669 [2024-12-09 10:19:44.329244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:13.669 [2024-12-09 10:19:44.329344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:28:13.669 [2024-12-09 10:19:44.329366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:13.669 [2024-12-09 10:19:44.329394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:13.669 [2024-12-09 10:19:44.329585] mngt/ftl_mngt.c: 427:trace_step:
*NOTICE*: [FTL][ftl0] Rollback
00:28:13.669 [2024-12-09 10:19:44.329607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:28:13.669 [2024-12-09 10:19:44.329621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:13.669 [2024-12-09 10:19:44.329633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:13.669 [2024-12-09 10:19:44.329712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:13.669 [2024-12-09 10:19:44.329732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:28:13.669 [2024-12-09 10:19:44.329745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:13.669 [2024-12-09 10:19:44.329757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:13.669 [2024-12-09 10:19:44.329801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:13.669 [2024-12-09 10:19:44.329822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:28:13.669 [2024-12-09 10:19:44.329859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:13.669 [2024-12-09 10:19:44.329871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:13.669 [2024-12-09 10:19:44.450652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:13.669 [2024-12-09 10:19:44.450739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:28:13.669 [2024-12-09 10:19:44.450761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:13.669 [2024-12-09 10:19:44.450774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:13.928 [2024-12-09 10:19:44.541287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:13.928 [2024-12-09 10:19:44.541380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:28:13.928 [2024-12-09 10:19:44.541407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:13.928 [2024-12-09 10:19:44.541420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:13.928 [2024-12-09 10:19:44.541524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:13.928 [2024-12-09 10:19:44.541544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:28:13.928 [2024-12-09 10:19:44.541558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:13.928 [2024-12-09 10:19:44.541570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:13.928 [2024-12-09 10:19:44.541610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:13.928 [2024-12-09 10:19:44.541656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:28:13.928 [2024-12-09 10:19:44.541669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:13.928 [2024-12-09 10:19:44.541681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:13.928 [2024-12-09 10:19:44.541823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:13.928 [2024-12-09 10:19:44.541873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:28:13.928 [2024-12-09 10:19:44.541888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:13.928 [2024-12-09 10:19:44.541900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status:
0
00:28:13.928 [2024-12-09 10:19:44.541959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:13.928 [2024-12-09 10:19:44.541978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:28:13.928 [2024-12-09 10:19:44.542006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:13.928 [2024-12-09 10:19:44.542019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:13.928 [2024-12-09 10:19:44.542082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:13.928 [2024-12-09 10:19:44.542115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:28:13.928 [2024-12-09 10:19:44.542128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:13.928 [2024-12-09 10:19:44.542140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:13.928 [2024-12-09 10:19:44.542200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:13.928 [2024-12-09 10:19:44.542231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:28:13.928 [2024-12-09 10:19:44.542244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:13.928 [2024-12-09 10:19:44.542255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:13.928 [2024-12-09 10:19:44.542455] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 477.757 ms, result 0
00:28:14.863
00:28:14.864
00:28:15.122 10:19:45 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero
00:28:15.122 10:19:45 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data
00:28:15.690 10:19:46 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
[2024-12-09 10:19:46.427782] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization...
00:28:15.690 [2024-12-09 10:19:46.428049] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79327 ]
00:28:15.949 [2024-12-09 10:19:46.616823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:16.208 [2024-12-09 10:19:46.749296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:16.466 [2024-12-09 10:19:47.147328] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:28:16.466 [2024-12-09 10:19:47.147658] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:28:16.725 [2024-12-09 10:19:47.313257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.725 [2024-12-09 10:19:47.313333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:28:16.725 [2024-12-09 10:19:47.313358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:28:16.725 [2024-12-09 10:19:47.313371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:16.725 [2024-12-09 10:19:47.316903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.725 [2024-12-09 10:19:47.316949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:28:16.725 [2024-12-09 10:19:47.316968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.503 ms
00:28:16.725 [2024-12-09 10:19:47.316980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:16.725 [2024-12-09 10:19:47.317121] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:28:16.725 [2024-12-09 10:19:47.318099] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:28:16.725 [2024-12-09 10:19:47.318142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.725 [2024-12-09 10:19:47.318158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:28:16.725 [2024-12-09 10:19:47.318171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.032 ms
00:28:16.725 [2024-12-09 10:19:47.318183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:16.726 [2024-12-09 10:19:47.320596] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:28:16.726 [2024-12-09 10:19:47.338055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.726 [2024-12-09 10:19:47.338132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:28:16.726 [2024-12-09 10:19:47.338153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.460 ms
00:28:16.726 [2024-12-09 10:19:47.338166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:16.726 [2024-12-09 10:19:47.338296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.726 [2024-12-09 10:19:47.338318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:28:16.726 [2024-12-09 10:19:47.338333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms
00:28:16.726 [2024-12-09 10:19:47.338345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:16.726 [2024-12-09 10:19:47.349345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.726 [2024-12-09 10:19:47.349400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:28:16.726 [2024-12-09 10:19:47.349419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.933 ms
00:28:16.726 [2024-12-09 10:19:47.349431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:16.726 [2024-12-09 10:19:47.349611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.726 [2024-12-09 10:19:47.349634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:28:16.726 [2024-12-09 10:19:47.349647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms
00:28:16.726 [2024-12-09 10:19:47.349660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:16.726 [2024-12-09 10:19:47.349707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.726 [2024-12-09 10:19:47.349724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:28:16.726 [2024-12-09 10:19:47.349737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms
00:28:16.726 [2024-12-09 10:19:47.349749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:16.726 [2024-12-09 10:19:47.349785] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:28:16.726 [2024-12-09 10:19:47.355176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.726 [2024-12-09 10:19:47.355216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:28:16.726 [2024-12-09 10:19:47.355233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.403 ms
00:28:16.726 [2024-12-09 10:19:47.355244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:16.726 [2024-12-09 10:19:47.355318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.726 [2024-12-09 10:19:47.355338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:28:16.726 [2024-12-09 10:19:47.355351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms
00:28:16.726 [2024-12-09 10:19:47.355362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:16.726 [2024-12-09 10:19:47.355401] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:28:16.726 [2024-12-09 10:19:47.355435] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:28:16.726 [2024-12-09 10:19:47.355480] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:28:16.726 [2024-12-09 10:19:47.355503] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:28:16.726 [2024-12-09 10:19:47.355615] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:28:16.726 [2024-12-09 10:19:47.355631] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:28:16.726 [2024-12-09 10:19:47.355647] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:28:16.726 [2024-12-09 10:19:47.355667] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:28:16.726 [2024-12-09 10:19:47.355681] ftl_layout.c:
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:28:16.726 [2024-12-09 10:19:47.355694] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
00:28:16.726 [2024-12-09 10:19:47.355706] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:28:16.726 [2024-12-09 10:19:47.355717] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:28:16.726 [2024-12-09 10:19:47.355729] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:28:16.726 [2024-12-09 10:19:47.355742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.726 [2024-12-09 10:19:47.355754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:28:16.726 [2024-12-09 10:19:47.355766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms
00:28:16.726 [2024-12-09 10:19:47.355777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:16.726 [2024-12-09 10:19:47.355903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.726 [2024-12-09 10:19:47.355929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:28:16.726 [2024-12-09 10:19:47.355942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms
00:28:16.726 [2024-12-09 10:19:47.355954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:16.726 [2024-12-09 10:19:47.356078] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:28:16.726 [2024-12-09 10:19:47.356098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:28:16.726 [2024-12-09 10:19:47.356111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:28:16.726 [2024-12-09 10:19:47.356123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:28:16.726 [2024-12-09 10:19:47.356135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:28:16.726 [2024-12-09 10:19:47.356145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:28:16.726 [2024-12-09 10:19:47.356156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB
00:28:16.726 [2024-12-09 10:19:47.356168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:28:16.726 [2024-12-09 10:19:47.356179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB
00:28:16.726 [2024-12-09 10:19:47.356190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:28:16.726 [2024-12-09 10:19:47.356200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:28:16.726 [2024-12-09 10:19:47.356225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB
00:28:16.726 [2024-12-09 10:19:47.356236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:28:16.726 [2024-12-09 10:19:47.356247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:28:16.726 [2024-12-09 10:19:47.356268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB
00:28:16.726 [2024-12-09 10:19:47.356279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:28:16.726 [2024-12-09 10:19:47.356290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:28:16.726 [2024-12-09 10:19:47.356301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB
00:28:16.726 [2024-12-09 10:19:47.356311] ftl_layout.c:
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:28:16.726 [2024-12-09 10:19:47.356322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:28:16.726 [2024-12-09 10:19:47.356332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB
00:28:16.726 [2024-12-09 10:19:47.356343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:28:16.726 [2024-12-09 10:19:47.356353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:28:16.726 [2024-12-09 10:19:47.356363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB
00:28:16.726 [2024-12-09 10:19:47.356373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:28:16.726 [2024-12-09 10:19:47.356390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:28:16.726 [2024-12-09 10:19:47.356400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB
00:28:16.726 [2024-12-09 10:19:47.356412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:28:16.726 [2024-12-09 10:19:47.356423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:28:16.726 [2024-12-09 10:19:47.356434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB
00:28:16.726 [2024-12-09 10:19:47.356446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:28:16.726 [2024-12-09 10:19:47.356456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:28:16.726 [2024-12-09 10:19:47.356467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB
00:28:16.726 [2024-12-09 10:19:47.356478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:28:16.726 [2024-12-09 10:19:47.356489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:28:16.726 [2024-12-09 10:19:47.356499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB
00:28:16.726 [2024-12-09 10:19:47.356509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:28:16.726 [2024-12-09 10:19:47.356520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:28:16.726 [2024-12-09 10:19:47.356531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB
00:28:16.726 [2024-12-09 10:19:47.356542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:28:16.726 [2024-12-09 10:19:47.356552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:28:16.726 [2024-12-09 10:19:47.356563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB
00:28:16.726 [2024-12-09 10:19:47.356574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:28:16.726 [2024-12-09 10:19:47.356586] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:28:16.726 [2024-12-09 10:19:47.356598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:28:16.726 [2024-12-09 10:19:47.356615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:28:16.726 [2024-12-09 10:19:47.356627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:28:16.726 [2024-12-09 10:19:47.356638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:28:16.726 [2024-12-09 10:19:47.356649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:28:16.726 [2024-12-09 10:19:47.356661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:28:16.726
[2024-12-09 10:19:47.356672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:28:16.726 [2024-12-09 10:19:47.356682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:28:16.726 [2024-12-09 10:19:47.356693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:28:16.726 [2024-12-09 10:19:47.356706] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:28:16.726 [2024-12-09 10:19:47.356721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:28:16.726 [2024-12-09 10:19:47.356734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:28:16.726 [2024-12-09 10:19:47.356746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:28:16.726 [2024-12-09 10:19:47.356758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:28:16.726 [2024-12-09 10:19:47.356769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:28:16.727 [2024-12-09 10:19:47.356784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:28:16.727 [2024-12-09 10:19:47.356797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:28:16.727 [2024-12-09 10:19:47.356809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:28:16.727 [2024-12-09 10:19:47.356820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:28:16.727 [2024-12-09 10:19:47.356855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:28:16.727 [2024-12-09 10:19:47.356869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:28:16.727 [2024-12-09 10:19:47.356896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:28:16.727 [2024-12-09 10:19:47.356909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:28:16.727 [2024-12-09 10:19:47.356921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:28:16.727 [2024-12-09 10:19:47.356934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:28:16.727 [2024-12-09 10:19:47.356945] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:28:16.727 [2024-12-09 10:19:47.356964] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:28:16.727 [2024-12-09 10:19:47.356986] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe
ver:0 blk_offs:0x20 blk_sz:0x20
00:28:16.727 [2024-12-09 10:19:47.357004] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:28:16.727 [2024-12-09 10:19:47.357023] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:28:16.727 [2024-12-09 10:19:47.357056] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:28:16.727 [2024-12-09 10:19:47.357075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.727 [2024-12-09 10:19:47.357096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:28:16.727 [2024-12-09 10:19:47.357109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.067 ms
00:28:16.727 [2024-12-09 10:19:47.357120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:16.727 [2024-12-09 10:19:47.398637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.727 [2024-12-09 10:19:47.398712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:28:16.727 [2024-12-09 10:19:47.398751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.426 ms
00:28:16.727 [2024-12-09 10:19:47.398765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:16.727 [2024-12-09 10:19:47.399017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.727 [2024-12-09 10:19:47.399040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:28:16.727 [2024-12-09 10:19:47.399054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms
00:28:16.727 [2024-12-09 10:19:47.399065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:16.727 [2024-12-09 10:19:47.459175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.727 [2024-12-09 10:19:47.459260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:28:16.727 [2024-12-09 10:19:47.459289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.072 ms
00:28:16.727 [2024-12-09 10:19:47.459302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:16.727 [2024-12-09 10:19:47.459512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.727 [2024-12-09 10:19:47.459534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:28:16.727 [2024-12-09 10:19:47.459548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:28:16.727 [2024-12-09 10:19:47.459560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:16.727 [2024-12-09 10:19:47.460169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.727 [2024-12-09 10:19:47.460190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:28:16.727 [2024-12-09 10:19:47.460212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.574 ms
00:28:16.727 [2024-12-09 10:19:47.460224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:16.727 [2024-12-09 10:19:47.460408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.727 [2024-12-09 10:19:47.460444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:28:16.727 [2024-12-09 10:19:47.460459]
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms
00:28:16.727 [2024-12-09 10:19:47.460471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:16.727 [2024-12-09 10:19:47.480623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.727 [2024-12-09 10:19:47.480701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:28:16.727 [2024-12-09 10:19:47.480725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.116 ms
00:28:16.727 [2024-12-09 10:19:47.480737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:16.727 [2024-12-09 10:19:47.498646] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3
00:28:16.727 [2024-12-09 10:19:47.498728] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:28:16.727 [2024-12-09 10:19:47.498753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.727 [2024-12-09 10:19:47.498767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:28:16.727 [2024-12-09 10:19:47.498785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.766 ms
00:28:16.727 [2024-12-09 10:19:47.498797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:16.985 [2024-12-09 10:19:47.530577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.985 [2024-12-09 10:19:47.530698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:28:16.985 [2024-12-09 10:19:47.530738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.554 ms
00:28:16.985 [2024-12-09 10:19:47.530751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:16.985 [2024-12-09 10:19:47.549069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.985 [2024-12-09 10:19:47.549444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:28:16.985 [2024-12-09 10:19:47.549479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.113 ms
00:28:16.985 [2024-12-09 10:19:47.549492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:16.985 [2024-12-09 10:19:47.567021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.985 [2024-12-09 10:19:47.567402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:28:16.985 [2024-12-09 10:19:47.567436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.338 ms
00:28:16.985 [2024-12-09 10:19:47.567449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:16.985 [2024-12-09 10:19:47.568548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.985 [2024-12-09 10:19:47.568586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:28:16.985 [2024-12-09 10:19:47.568604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.843 ms
00:28:16.985 [2024-12-09 10:19:47.568616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:16.985 [2024-12-09 10:19:47.653178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.985 [2024-12-09 10:19:47.653611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:28:16.985 [2024-12-09 10:19:47.653646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0]
duration: 84.521 ms
00:28:16.985 [2024-12-09 10:19:47.653660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:16.985 [2024-12-09 10:19:47.670808] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:28:16.985 [2024-12-09 10:19:47.694439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.985 [2024-12-09 10:19:47.694553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:28:16.985 [2024-12-09 10:19:47.694575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.552 ms
00:28:16.985 [2024-12-09 10:19:47.694601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:16.985 [2024-12-09 10:19:47.694818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.985 [2024-12-09 10:19:47.694839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:28:16.985 [2024-12-09 10:19:47.694852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:28:16.985 [2024-12-09 10:19:47.694926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:16.985 [2024-12-09 10:19:47.695038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.985 [2024-12-09 10:19:47.695057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:28:16.985 [2024-12-09 10:19:47.695071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms
00:28:16.985 [2024-12-09 10:19:47.695090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:16.985 [2024-12-09 10:19:47.695143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.985 [2024-12-09 10:19:47.695163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:28:16.985 [2024-12-09 10:19:47.695177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms
00:28:16.985 [2024-12-09 10:19:47.695189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:16.985 [2024-12-09 10:19:47.695241] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:28:16.985 [2024-12-09 10:19:47.695259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.985 [2024-12-09 10:19:47.695272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:28:16.985 [2024-12-09 10:19:47.695284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms
00:28:16.985 [2024-12-09 10:19:47.695311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:16.985 [2024-12-09 10:19:47.729197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.985 [2024-12-09 10:19:47.729304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:28:16.985 [2024-12-09 10:19:47.729343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.806 ms
00:28:16.985 [2024-12-09 10:19:47.729371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:16.985 [2024-12-09 10:19:47.729606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:16.985 [2024-12-09 10:19:47.729643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:28:16.985 [2024-12-09 10:19:47.729657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms
00:28:16.985 [2024-12-09 10:19:47.729669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:16.985 [2024-12-09 10:19:47.731072] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:28:16.985 [2024-12-09 10:19:47.736473] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 417.343 ms, result 0
00:28:16.985 [2024-12-09 10:19:47.737517] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:28:16.985 [2024-12-09 10:19:47.754769] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:28:17.244  [2024-12-09T10:19:48.041Z] Copying: 4096/4096 [kB] (average 21 MBps)[2024-12-09 10:19:47.948935] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:28:17.244 [2024-12-09 10:19:47.962024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:17.244 [2024-12-09 10:19:47.962108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:28:17.244 [2024-12-09 10:19:47.962140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:28:17.244 [2024-12-09 10:19:47.962153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:17.244 [2024-12-09 10:19:47.962188] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:28:17.244 [2024-12-09 10:19:47.966068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:17.244 [2024-12-09 10:19:47.966108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:28:17.244 [2024-12-09 10:19:47.966124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.857 ms
00:28:17.244 [2024-12-09 10:19:47.966136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:17.244 [2024-12-09 10:19:47.968006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:17.244 [2024-12-09 10:19:47.968047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:28:17.244 [2024-12-09 10:19:47.968064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.839 ms
00:28:17.244 [2024-12-09 10:19:47.968076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:17.244 [2024-12-09 10:19:47.972048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:17.244 [2024-12-09 10:19:47.972087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:28:17.244 [2024-12-09 10:19:47.972113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.940 ms
00:28:17.244 [2024-12-09 10:19:47.972125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:17.244 [2024-12-09 10:19:47.979918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:17.244 [2024-12-09 10:19:47.979979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:28:17.244 [2024-12-09 10:19:47.979997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.739 ms
00:28:17.244 [2024-12-09 10:19:47.980009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:17.244 [2024-12-09 10:19:48.011749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:17.244 [2024-12-09 10:19:48.011800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:28:17.244 [2024-12-09 10:19:48.011833] mngt/ftl_mngt.c: 430:trace_step:
*NOTICE*: [FTL][ftl0] duration: 31.662 ms
00:28:17.244 [2024-12-09 10:19:48.011875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:17.244 [2024-12-09 10:19:48.029965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:17.244 [2024-12-09 10:19:48.030042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:28:17.244 [2024-12-09 10:19:48.030060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.989 ms
00:28:17.244 [2024-12-09 10:19:48.030094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:17.244 [2024-12-09 10:19:48.030279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:17.244 [2024-12-09 10:19:48.030299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:28:17.244 [2024-12-09 10:19:48.030329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms
00:28:17.244 [2024-12-09 10:19:48.030342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:17.504 [2024-12-09 10:19:48.062606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:17.504 [2024-12-09 10:19:48.062683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:28:17.504 [2024-12-09 10:19:48.062718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.239 ms
00:28:17.504 [2024-12-09 10:19:48.062729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:17.504 [2024-12-09 10:19:48.094346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:17.504 [2024-12-09 10:19:48.094394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:28:17.504 [2024-12-09 10:19:48.094413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.534 ms
00:28:17.504 [2024-12-09 10:19:48.094439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:17.504 [2024-12-09 10:19:48.124966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:17.504 [2024-12-09 10:19:48.125021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:28:17.504 [2024-12-09 10:19:48.125041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.424 ms
00:28:17.504 [2024-12-09 10:19:48.125053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:17.504 [2024-12-09 10:19:48.155652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:17.504 [2024-12-09 10:19:48.155724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:28:17.504 [2024-12-09 10:19:48.155758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.491 ms
00:28:17.504 [2024-12-09 10:19:48.155770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:17.504 [2024-12-09 10:19:48.155872] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:28:17.504 [2024-12-09 10:19:48.155910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.155927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.155939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.155967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.155980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.155992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120
wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]
Band 54: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:28:17.504 [2024-12-09 10:19:48.156716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:28:17.505 [2024-12-09 10:19:48.156729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:28:17.505 [2024-12-09 10:19:48.156741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:28:17.505 [2024-12-09 10:19:48.156753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:28:17.505 [2024-12-09 10:19:48.156766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:28:17.505 [2024-12-09 10:19:48.156777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:28:17.505 [2024-12-09 10:19:48.156789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:28:17.505 [2024-12-09 10:19:48.156801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:28:17.505 [2024-12-09 10:19:48.156813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:28:17.505 [2024-12-09 10:19:48.156825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:28:17.505 [2024-12-09 10:19:48.156837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:28:17.505 [2024-12-09 10:19:48.156850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:28:17.505 [2024-12-09 10:19:48.156875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:28:17.505 [2024-12-09 10:19:48.156889] ftl_debug.c: 167:ftl_dev_dump_bands:
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:28:17.505 [2024-12-09 10:19:48.156902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:28:17.505 [2024-12-09 10:19:48.156914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:28:17.505 [2024-12-09 10:19:48.156926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:28:17.505 [2024-12-09 10:19:48.156938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:28:17.505 [2024-12-09 10:19:48.156950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:28:17.505 [2024-12-09 10:19:48.156962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:28:17.505 [2024-12-09 10:19:48.156973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:28:17.505 [2024-12-09 10:19:48.156985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:28:17.505 [2024-12-09 10:19:48.157003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:28:17.505 [2024-12-09 10:19:48.157015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:28:17.505 [2024-12-09 10:19:48.157028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:28:17.505 [2024-12-09 10:19:48.157039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:28:17.505 [2024-12-09 10:19:48.157052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:28:17.505 [2024-12-09 10:19:48.157065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:28:17.505 [2024-12-09 10:19:48.157094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:28:17.505 [2024-12-09 10:19:48.157107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:28:17.505 [2024-12-09 10:19:48.157119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:28:17.505 [2024-12-09 10:19:48.157131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:28:17.505 [2024-12-09 10:19:48.157143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:28:17.505 [2024-12-09 10:19:48.157156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:28:17.505 [2024-12-09 10:19:48.157168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:28:17.505 [2024-12-09 10:19:48.157190] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:28:17.505 [2024-12-09 10:19:48.157203] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1416d288-a52a-4793-951a-3821cfb97ba2
00:28:17.505 [2024-12-09 10:19:48.157215] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:28:17.505 [2024-12-09 10:19:48.157227] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total
writes: 960 00:28:17.505 [2024-12-09 10:19:48.157239] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:17.505 [2024-12-09 10:19:48.157252] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:17.505 [2024-12-09 10:19:48.157263] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:17.505 [2024-12-09 10:19:48.157275] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:17.505 [2024-12-09 10:19:48.157294] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:17.505 [2024-12-09 10:19:48.157304] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:17.505 [2024-12-09 10:19:48.157314] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:17.505 [2024-12-09 10:19:48.157326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.505 [2024-12-09 10:19:48.157337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:17.505 [2024-12-09 10:19:48.157350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.457 ms 00:28:17.505 [2024-12-09 10:19:48.157361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.505 [2024-12-09 10:19:48.174978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.505 [2024-12-09 10:19:48.175026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:17.505 [2024-12-09 10:19:48.175044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.588 ms 00:28:17.505 [2024-12-09 10:19:48.175057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.505 [2024-12-09 10:19:48.175562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.505 [2024-12-09 10:19:48.175583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:17.505 [2024-12-09 10:19:48.175597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.443 ms 00:28:17.505 [2024-12-09 10:19:48.175607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.505 [2024-12-09 10:19:48.224047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.505 [2024-12-09 10:19:48.224127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:17.505 [2024-12-09 10:19:48.224163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.505 [2024-12-09 10:19:48.224181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.505 [2024-12-09 10:19:48.224387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.505 [2024-12-09 10:19:48.224405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:17.505 [2024-12-09 10:19:48.224418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.505 [2024-12-09 10:19:48.224429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.505 [2024-12-09 10:19:48.224497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.505 [2024-12-09 10:19:48.224515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:17.505 [2024-12-09 10:19:48.224527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.505 [2024-12-09 10:19:48.224539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.505 [2024-12-09 10:19:48.224570] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.505 [2024-12-09 10:19:48.224584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:17.505 [2024-12-09 10:19:48.224596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.505 [2024-12-09 10:19:48.224606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.764 [2024-12-09 10:19:48.345151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.764 [2024-12-09 10:19:48.345242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:17.764 [2024-12-09 10:19:48.345264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.764 [2024-12-09 10:19:48.345288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.764 [2024-12-09 10:19:48.438458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.764 [2024-12-09 10:19:48.438592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:17.764 [2024-12-09 10:19:48.438613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.764 [2024-12-09 10:19:48.438626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.764 [2024-12-09 10:19:48.438727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.764 [2024-12-09 10:19:48.438746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:17.764 [2024-12-09 10:19:48.438758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.764 [2024-12-09 10:19:48.438769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.764 [2024-12-09 10:19:48.438808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.764 [2024-12-09 10:19:48.438838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:17.764 [2024-12-09 10:19:48.438909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.764 [2024-12-09 10:19:48.438921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.764 [2024-12-09 10:19:48.439069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.764 [2024-12-09 10:19:48.439089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:17.764 [2024-12-09 10:19:48.439103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.764 [2024-12-09 10:19:48.439115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.764 [2024-12-09 10:19:48.439171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.764 [2024-12-09 10:19:48.439190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:17.764 [2024-12-09 10:19:48.439210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.764 [2024-12-09 10:19:48.439221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.764 [2024-12-09 10:19:48.439275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.764 [2024-12-09 10:19:48.439292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:17.764 [2024-12-09 10:19:48.439319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.764 [2024-12-09 10:19:48.439332] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0
00:28:17.764 [2024-12-09 10:19:48.439390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:17.764 [2024-12-09 10:19:48.439413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:28:17.764 [2024-12-09 10:19:48.439440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:17.764 [2024-12-09 10:19:48.439452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:17.764 [2024-12-09 10:19:48.439663] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 477.635 ms, result 0
00:28:19.140 
00:28:19.140 
00:28:19.140 10:19:49 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=79358
00:28:19.140 10:19:49 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:28:19.140 10:19:49 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 79358
00:28:19.140 10:19:49 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 79358 ']'
00:28:19.140 10:19:49 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:19.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:19.140 10:19:49 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:19.140 10:19:49 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:19.141 10:19:49 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:19.141 10:19:49 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:28:19.141 [2024-12-09 10:19:49.761667] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization...
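
The xtrace above is the test's start-and-wait idiom: trim.sh launches spdk_tgt in the background, records its pid in svcpid, and waitforlisten polls the RPC socket at /var/tmp/spdk.sock (up to max_retries=100) before any rpc.py call is made. A minimal bash sketch of that pattern, reusing only the paths shown in this log; the socket-existence poll below is a simplified stand-in for waitforlisten's rpc.py probe, not the real helper:

    # Minimal sketch of the start-and-wait pattern traced above (not the
    # project script). Paths are the ones visible in this log.
    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc_sock=/var/tmp/spdk.sock

    "$spdk_tgt" -L ftl_init &        # -L ftl_init enables the FTL init log component
    svcpid=$!

    max_retries=100
    until [[ -S "$rpc_sock" ]]; do   # wait for the UNIX domain socket to appear
        kill -0 "$svcpid" 2>/dev/null || exit 1   # bail out if the target already died
        (( max_retries-- > 0 )) || exit 1         # cap the wait, as waitforlisten does
        sleep 0.1
    done
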
00:28:19.141 [2024-12-09 10:19:49.761902] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79358 ] 00:28:19.399 [2024-12-09 10:19:49.942475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.399 [2024-12-09 10:19:50.082550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.333 10:19:51 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:20.333 10:19:51 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:28:20.333 10:19:51 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:28:20.590 [2024-12-09 10:19:51.298180] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:20.590 [2024-12-09 10:19:51.298269] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:20.851 [2024-12-09 10:19:51.491372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.851 [2024-12-09 10:19:51.491470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:20.851 [2024-12-09 10:19:51.491512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:28:20.851 [2024-12-09 10:19:51.491525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.851 [2024-12-09 10:19:51.495677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.851 [2024-12-09 10:19:51.495722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:20.851 [2024-12-09 10:19:51.495769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.123 ms 00:28:20.851 [2024-12-09 10:19:51.495783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.851 [2024-12-09 10:19:51.495955] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:20.851 [2024-12-09 10:19:51.496928] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:20.851 [2024-12-09 10:19:51.496972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.851 [2024-12-09 10:19:51.496989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:20.851 [2024-12-09 10:19:51.497005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.032 ms 00:28:20.851 [2024-12-09 10:19:51.497016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.851 [2024-12-09 10:19:51.499141] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:20.851 [2024-12-09 10:19:51.516590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.851 [2024-12-09 10:19:51.516659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:20.851 [2024-12-09 10:19:51.516680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.455 ms 00:28:20.851 [2024-12-09 10:19:51.516700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.851 [2024-12-09 10:19:51.516878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.851 [2024-12-09 10:19:51.516910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:20.851 [2024-12-09 10:19:51.516928] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:28:20.851 [2024-12-09 10:19:51.516947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.851 [2024-12-09 10:19:51.525800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.851 [2024-12-09 10:19:51.525893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:20.851 [2024-12-09 10:19:51.525914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.767 ms 00:28:20.851 [2024-12-09 10:19:51.525935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.851 [2024-12-09 10:19:51.526150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.851 [2024-12-09 10:19:51.526177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:20.851 [2024-12-09 10:19:51.526192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:28:20.851 [2024-12-09 10:19:51.526214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.851 [2024-12-09 10:19:51.526255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.851 [2024-12-09 10:19:51.526274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:20.851 [2024-12-09 10:19:51.526288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:20.851 [2024-12-09 10:19:51.526303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.851 [2024-12-09 10:19:51.526344] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:28:20.851 [2024-12-09 10:19:51.531940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.851 [2024-12-09 10:19:51.532132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:20.851 [2024-12-09 10:19:51.532264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.605 ms 00:28:20.851 [2024-12-09 10:19:51.532316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.851 [2024-12-09 10:19:51.532467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.851 [2024-12-09 10:19:51.532523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:20.851 [2024-12-09 10:19:51.532569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:20.851 [2024-12-09 10:19:51.532685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.851 [2024-12-09 10:19:51.532767] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:20.851 [2024-12-09 10:19:51.532846] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:20.851 [2024-12-09 10:19:51.533177] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:20.851 [2024-12-09 10:19:51.533264] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:20.851 [2024-12-09 10:19:51.533567] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:20.851 [2024-12-09 10:19:51.533597] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:20.851 [2024-12-09 10:19:51.533628] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:20.851 [2024-12-09 10:19:51.533652] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:20.851 [2024-12-09 10:19:51.533673] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:20.851 [2024-12-09 10:19:51.533695] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:28:20.851 [2024-12-09 10:19:51.533710] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:20.851 [2024-12-09 10:19:51.533728] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:20.851 [2024-12-09 10:19:51.533745] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:20.851 [2024-12-09 10:19:51.533758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.851 [2024-12-09 10:19:51.533773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:20.851 [2024-12-09 10:19:51.533787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.002 ms 00:28:20.851 [2024-12-09 10:19:51.533801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.851 [2024-12-09 10:19:51.533928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.851 [2024-12-09 10:19:51.533952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:20.851 [2024-12-09 10:19:51.533965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:28:20.851 [2024-12-09 10:19:51.533980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.851 [2024-12-09 10:19:51.534109] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:20.851 [2024-12-09 10:19:51.534135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:20.851 [2024-12-09 10:19:51.534149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:20.851 [2024-12-09 10:19:51.534164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:20.851 [2024-12-09 10:19:51.534176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:20.851 [2024-12-09 10:19:51.534193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:20.851 [2024-12-09 10:19:51.534205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:28:20.851 [2024-12-09 10:19:51.534222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:20.851 [2024-12-09 10:19:51.534233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:28:20.851 [2024-12-09 10:19:51.534247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:20.851 [2024-12-09 10:19:51.534258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:20.851 [2024-12-09 10:19:51.534273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:28:20.851 [2024-12-09 10:19:51.534284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:20.851 [2024-12-09 10:19:51.534297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:20.851 [2024-12-09 10:19:51.534309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:28:20.851 [2024-12-09 10:19:51.534323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:20.851 
[2024-12-09 10:19:51.534334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:20.851 [2024-12-09 10:19:51.534348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:28:20.851 [2024-12-09 10:19:51.534372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:20.851 [2024-12-09 10:19:51.534387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:20.851 [2024-12-09 10:19:51.534398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:28:20.851 [2024-12-09 10:19:51.534412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:20.851 [2024-12-09 10:19:51.534423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:20.851 [2024-12-09 10:19:51.534440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:28:20.851 [2024-12-09 10:19:51.534451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:20.851 [2024-12-09 10:19:51.534465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:20.851 [2024-12-09 10:19:51.534480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:28:20.851 [2024-12-09 10:19:51.534494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:20.851 [2024-12-09 10:19:51.534505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:20.851 [2024-12-09 10:19:51.534522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:28:20.851 [2024-12-09 10:19:51.534548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:20.851 [2024-12-09 10:19:51.534577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:20.851 [2024-12-09 10:19:51.534588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:28:20.851 [2024-12-09 10:19:51.534628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:20.851 [2024-12-09 10:19:51.534649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:20.851 [2024-12-09 10:19:51.534673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:28:20.851 [2024-12-09 10:19:51.534684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:20.851 [2024-12-09 10:19:51.534709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:20.852 [2024-12-09 10:19:51.534723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:28:20.852 [2024-12-09 10:19:51.534746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:20.852 [2024-12-09 10:19:51.534759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:20.852 [2024-12-09 10:19:51.534777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:28:20.852 [2024-12-09 10:19:51.534789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:20.852 [2024-12-09 10:19:51.534806] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:20.852 [2024-12-09 10:19:51.534838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:20.852 [2024-12-09 10:19:51.534862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:20.852 [2024-12-09 10:19:51.534876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:20.852 [2024-12-09 10:19:51.534894] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:28:20.852 [2024-12-09 10:19:51.534907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:20.852 [2024-12-09 10:19:51.534941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:20.852 [2024-12-09 10:19:51.534954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:20.852 [2024-12-09 10:19:51.534971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:20.852 [2024-12-09 10:19:51.534984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:20.852 [2024-12-09 10:19:51.535004] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:20.852 [2024-12-09 10:19:51.535021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:20.852 [2024-12-09 10:19:51.535047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:28:20.852 [2024-12-09 10:19:51.535061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:28:20.852 [2024-12-09 10:19:51.535080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:28:20.852 [2024-12-09 10:19:51.535094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:28:20.852 [2024-12-09 10:19:51.535112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:28:20.852 [2024-12-09 10:19:51.535125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:28:20.852 [2024-12-09 10:19:51.535143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:28:20.852 [2024-12-09 10:19:51.535156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:28:20.852 [2024-12-09 10:19:51.535175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:28:20.852 [2024-12-09 10:19:51.535188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:28:20.852 [2024-12-09 10:19:51.535206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:28:20.852 [2024-12-09 10:19:51.535219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:28:20.852 [2024-12-09 10:19:51.535237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:28:20.852 [2024-12-09 10:19:51.535251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:28:20.852 [2024-12-09 10:19:51.535269] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:20.852 [2024-12-09 
10:19:51.535292] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:20.852 [2024-12-09 10:19:51.535323] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:20.852 [2024-12-09 10:19:51.535337] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:20.852 [2024-12-09 10:19:51.535355] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:20.852 [2024-12-09 10:19:51.535368] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:20.852 [2024-12-09 10:19:51.535388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.852 [2024-12-09 10:19:51.535402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:20.852 [2024-12-09 10:19:51.535421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.352 ms 00:28:20.852 [2024-12-09 10:19:51.535440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.852 [2024-12-09 10:19:51.578904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.852 [2024-12-09 10:19:51.578989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:20.852 [2024-12-09 10:19:51.579020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.352 ms 00:28:20.852 [2024-12-09 10:19:51.579042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.852 [2024-12-09 10:19:51.579286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.852 [2024-12-09 10:19:51.579321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:20.852 [2024-12-09 10:19:51.579341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:28:20.852 [2024-12-09 10:19:51.579354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.852 [2024-12-09 10:19:51.627852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.852 [2024-12-09 10:19:51.627982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:20.852 [2024-12-09 10:19:51.628008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.447 ms 00:28:20.852 [2024-12-09 10:19:51.628022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.852 [2024-12-09 10:19:51.628201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.852 [2024-12-09 10:19:51.628222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:20.852 [2024-12-09 10:19:51.628241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:20.852 [2024-12-09 10:19:51.628253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.852 [2024-12-09 10:19:51.628986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.852 [2024-12-09 10:19:51.629019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:20.852 [2024-12-09 10:19:51.629037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.668 ms 00:28:20.852 [2024-12-09 10:19:51.629050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:28:20.852 [2024-12-09 10:19:51.629245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.852 [2024-12-09 10:19:51.629278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:20.852 [2024-12-09 10:19:51.629308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:28:20.852 [2024-12-09 10:19:51.629334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.114 [2024-12-09 10:19:51.652913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.114 [2024-12-09 10:19:51.652960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:21.114 [2024-12-09 10:19:51.652987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.540 ms 00:28:21.114 [2024-12-09 10:19:51.653002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.114 [2024-12-09 10:19:51.683395] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:21.114 [2024-12-09 10:19:51.683435] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:21.114 [2024-12-09 10:19:51.683473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.114 [2024-12-09 10:19:51.683486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:21.114 [2024-12-09 10:19:51.683501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.274 ms 00:28:21.114 [2024-12-09 10:19:51.683524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.114 [2024-12-09 10:19:51.712482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.114 [2024-12-09 10:19:51.712540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:21.114 [2024-12-09 10:19:51.712577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.865 ms 00:28:21.114 [2024-12-09 10:19:51.712590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.114 [2024-12-09 10:19:51.727505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.114 [2024-12-09 10:19:51.727728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:21.114 [2024-12-09 10:19:51.727764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.811 ms 00:28:21.114 [2024-12-09 10:19:51.727777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.114 [2024-12-09 10:19:51.743068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.114 [2024-12-09 10:19:51.743108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:21.114 [2024-12-09 10:19:51.743129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.163 ms 00:28:21.114 [2024-12-09 10:19:51.743141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.114 [2024-12-09 10:19:51.744113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.114 [2024-12-09 10:19:51.744150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:21.114 [2024-12-09 10:19:51.744174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.831 ms 00:28:21.114 [2024-12-09 10:19:51.744188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.114 [2024-12-09 
10:19:51.830625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.114 [2024-12-09 10:19:51.830722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:21.114 [2024-12-09 10:19:51.830763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.395 ms 00:28:21.114 [2024-12-09 10:19:51.830775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.114 [2024-12-09 10:19:51.843556] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:28:21.114 [2024-12-09 10:19:51.869148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.114 [2024-12-09 10:19:51.869242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:21.114 [2024-12-09 10:19:51.869295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.146 ms 00:28:21.114 [2024-12-09 10:19:51.869339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.114 [2024-12-09 10:19:51.869578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.114 [2024-12-09 10:19:51.869601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:21.114 [2024-12-09 10:19:51.869615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:21.114 [2024-12-09 10:19:51.869629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.114 [2024-12-09 10:19:51.869699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.114 [2024-12-09 10:19:51.869718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:21.114 [2024-12-09 10:19:51.869749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:28:21.114 [2024-12-09 10:19:51.869763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.114 [2024-12-09 10:19:51.869796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.114 [2024-12-09 10:19:51.869812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:21.114 [2024-12-09 10:19:51.869823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:21.114 [2024-12-09 10:19:51.869837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.114 [2024-12-09 10:19:51.869887] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:21.114 [2024-12-09 10:19:51.869951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.114 [2024-12-09 10:19:51.869965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:21.114 [2024-12-09 10:19:51.870022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:28:21.114 [2024-12-09 10:19:51.870058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.114 [2024-12-09 10:19:51.902841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.114 [2024-12-09 10:19:51.902911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:21.114 [2024-12-09 10:19:51.902952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.721 ms 00:28:21.114 [2024-12-09 10:19:51.902965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.114 [2024-12-09 10:19:51.903120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.114 [2024-12-09 10:19:51.903140] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:21.114 [2024-12-09 10:19:51.903162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:28:21.114 [2024-12-09 10:19:51.903175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.114 [2024-12-09 10:19:51.904479] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:21.114 [2024-12-09 10:19:51.908849] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 412.672 ms, result 0 00:28:21.386 [2024-12-09 10:19:51.910120] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:21.386 Some configs were skipped because the RPC state that can call them passed over. 00:28:21.386 10:19:51 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:28:21.645 [2024-12-09 10:19:52.252628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.645 [2024-12-09 10:19:52.253021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:28:21.645 [2024-12-09 10:19:52.253157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.697 ms 00:28:21.645 [2024-12-09 10:19:52.253311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.645 [2024-12-09 10:19:52.253421] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.509 ms, result 0 00:28:21.645 true 00:28:21.645 10:19:52 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:28:21.905 [2024-12-09 10:19:52.620776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.905 [2024-12-09 10:19:52.621111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:28:21.905 [2024-12-09 10:19:52.621306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.293 ms 00:28:21.905 [2024-12-09 10:19:52.621372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.905 [2024-12-09 10:19:52.621557] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.064 ms, result 0 00:28:21.905 true 00:28:21.905 10:19:52 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 79358 00:28:21.905 10:19:52 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79358 ']' 00:28:21.905 10:19:52 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79358 00:28:21.905 10:19:52 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:28:21.905 10:19:52 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:21.905 10:19:52 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79358 00:28:21.905 killing process with pid 79358 00:28:21.905 10:19:52 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:21.905 10:19:52 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:21.905 10:19:52 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79358' 00:28:21.905 10:19:52 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 79358 00:28:21.905 10:19:52 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 79358 00:28:23.282 [2024-12-09 10:19:53.785461] 
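
The two bdev_ftl_unmap calls traced above hit opposite ends of the device: the startup layout dump reported 23592960 L2P entries for ftl0, and 23592960 - 1024 = 23591936, so the second call unmaps exactly the last 1024 blocks. Replayed standalone against a running target, the calls and the killprocess-style teardown traced after them look like this (rpc.py path as in this log; the teardown is condensed from the fuller checks the helper performs):

    # Sketch of the trim RPCs the test issues, plus a condensed teardown.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$rpc" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
    "$rpc" bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024   # 23592960 - 1024

    kill -0 "$svcpid" && kill "$svcpid"   # check the pid is alive, then terminate it
    wait "$svcpid"                        # reap it so the script sees the exit status
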
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.282 [2024-12-09 10:19:53.785544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:23.282 [2024-12-09 10:19:53.785567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:23.282 [2024-12-09 10:19:53.785596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.282 [2024-12-09 10:19:53.785631] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:28:23.282 [2024-12-09 10:19:53.789309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.282 [2024-12-09 10:19:53.789345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:23.282 [2024-12-09 10:19:53.789367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.651 ms 00:28:23.282 [2024-12-09 10:19:53.789380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.282 [2024-12-09 10:19:53.789743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.282 [2024-12-09 10:19:53.789770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:23.282 [2024-12-09 10:19:53.789789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:28:23.282 [2024-12-09 10:19:53.789801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.282 [2024-12-09 10:19:53.793792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.282 [2024-12-09 10:19:53.793848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:23.282 [2024-12-09 10:19:53.793870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.961 ms 00:28:23.282 [2024-12-09 10:19:53.793883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.282 [2024-12-09 10:19:53.801262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.282 [2024-12-09 10:19:53.801304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:23.282 [2024-12-09 10:19:53.801326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.327 ms 00:28:23.282 [2024-12-09 10:19:53.801339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.282 [2024-12-09 10:19:53.814230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.282 [2024-12-09 10:19:53.814302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:23.282 [2024-12-09 10:19:53.814330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.788 ms 00:28:23.282 [2024-12-09 10:19:53.814343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.282 [2024-12-09 10:19:53.823647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.282 [2024-12-09 10:19:53.823717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:23.282 [2024-12-09 10:19:53.823741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.198 ms 00:28:23.282 [2024-12-09 10:19:53.823754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.282 [2024-12-09 10:19:53.823978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.282 [2024-12-09 10:19:53.824003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:23.282 [2024-12-09 10:19:53.824031] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:28:23.282 [2024-12-09 10:19:53.824044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.282 [2024-12-09 10:19:53.837292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.282 [2024-12-09 10:19:53.837355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:23.282 [2024-12-09 10:19:53.837383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.203 ms 00:28:23.282 [2024-12-09 10:19:53.837398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.282 [2024-12-09 10:19:53.849814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.282 [2024-12-09 10:19:53.849880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:23.282 [2024-12-09 10:19:53.849918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.324 ms 00:28:23.282 [2024-12-09 10:19:53.849933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.282 [2024-12-09 10:19:53.861960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.282 [2024-12-09 10:19:53.862011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:23.282 [2024-12-09 10:19:53.862036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.930 ms 00:28:23.282 [2024-12-09 10:19:53.862050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.282 [2024-12-09 10:19:53.874084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.282 [2024-12-09 10:19:53.874143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:23.282 [2024-12-09 10:19:53.874169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.896 ms 00:28:23.282 [2024-12-09 10:19:53.874183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.282 [2024-12-09 10:19:53.874261] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:23.282 [2024-12-09 10:19:53.874292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:23.282 [2024-12-09 10:19:53.874326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:23.282 [2024-12-09 10:19:53.874341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:23.282 [2024-12-09 10:19:53.874360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:23.282 [2024-12-09 10:19:53.874375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:23.282 [2024-12-09 10:19:53.874409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:23.282 [2024-12-09 10:19:53.874423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:23.282 [2024-12-09 10:19:53.874446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:23.282 [2024-12-09 10:19:53.874461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:23.282 [2024-12-09 10:19:53.874480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:23.282 [2024-12-09 
10:19:53.874494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:23.282 [2024-12-09 10:19:53.874513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:23.282 [2024-12-09 10:19:53.874527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:23.282 [2024-12-09 10:19:53.874549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:23.282 [2024-12-09 10:19:53.874563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:23.282 [2024-12-09 10:19:53.874582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:23.282 [2024-12-09 10:19:53.874597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:23.282 [2024-12-09 10:19:53.874615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:23.282 [2024-12-09 10:19:53.874629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:23.282 [2024-12-09 10:19:53.874649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:23.282 [2024-12-09 10:19:53.874662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:23.282 [2024-12-09 10:19:53.874687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:23.282 [2024-12-09 10:19:53.874701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:23.282 [2024-12-09 10:19:53.874720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:23.282 [2024-12-09 10:19:53.874734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.874754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.874769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.874789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.874804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.874823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.874860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.874882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.874896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.874916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.874936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:28:23.283 [2024-12-09 10:19:53.874955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.874969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.875991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.876011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.876026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.876045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.876059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.876078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:23.283 [2024-12-09 10:19:53.876121] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:23.283 [2024-12-09 10:19:53.876157] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1416d288-a52a-4793-951a-3821cfb97ba2 00:28:23.283 [2024-12-09 10:19:53.876171] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:23.283 [2024-12-09 10:19:53.876190] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:23.283 [2024-12-09 10:19:53.876207] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:23.283 [2024-12-09 10:19:53.876228] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:23.283 [2024-12-09 10:19:53.876241] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:23.283 [2024-12-09 10:19:53.876261] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:23.283 [2024-12-09 10:19:53.876274] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:23.283 [2024-12-09 10:19:53.876292] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:23.283 [2024-12-09 10:19:53.876304] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:23.283 [2024-12-09 10:19:53.876323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:23.283 [2024-12-09 10:19:53.876337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:23.283 [2024-12-09 10:19:53.876357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.067 ms 00:28:23.283 [2024-12-09 10:19:53.876377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.283 [2024-12-09 10:19:53.894461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.283 [2024-12-09 10:19:53.894629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:23.283 [2024-12-09 10:19:53.894759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.025 ms 00:28:23.283 [2024-12-09 10:19:53.894814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.283 [2024-12-09 10:19:53.895507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.283 [2024-12-09 10:19:53.895648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:23.284 [2024-12-09 10:19:53.895769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.445 ms 00:28:23.284 [2024-12-09 10:19:53.895823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.284 [2024-12-09 10:19:53.956233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.284 [2024-12-09 10:19:53.956576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:23.284 [2024-12-09 10:19:53.956736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.284 [2024-12-09 10:19:53.956793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.284 [2024-12-09 10:19:53.957168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.284 [2024-12-09 10:19:53.957311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:23.284 [2024-12-09 10:19:53.957349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.284 [2024-12-09 10:19:53.957369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.284 [2024-12-09 10:19:53.957478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.284 [2024-12-09 10:19:53.957499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:23.284 [2024-12-09 10:19:53.957525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.284 [2024-12-09 10:19:53.957549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.284 [2024-12-09 10:19:53.957590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.284 [2024-12-09 10:19:53.957606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:23.284 [2024-12-09 10:19:53.957633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.284 [2024-12-09 10:19:53.957646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.284 [2024-12-09 10:19:54.072070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.284 [2024-12-09 10:19:54.072399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:23.284 [2024-12-09 10:19:54.072456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.284 [2024-12-09 10:19:54.072472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.542 [2024-12-09 
10:19:54.161151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.543 [2024-12-09 10:19:54.161245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:23.543 [2024-12-09 10:19:54.161283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.543 [2024-12-09 10:19:54.161298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.543 [2024-12-09 10:19:54.161438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.543 [2024-12-09 10:19:54.161459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:23.543 [2024-12-09 10:19:54.161487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.543 [2024-12-09 10:19:54.161502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.543 [2024-12-09 10:19:54.161551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.543 [2024-12-09 10:19:54.161568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:23.543 [2024-12-09 10:19:54.161587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.543 [2024-12-09 10:19:54.161601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.543 [2024-12-09 10:19:54.161766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.543 [2024-12-09 10:19:54.161787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:23.543 [2024-12-09 10:19:54.161807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.543 [2024-12-09 10:19:54.161821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.543 [2024-12-09 10:19:54.161922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.543 [2024-12-09 10:19:54.161942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:23.543 [2024-12-09 10:19:54.161962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.543 [2024-12-09 10:19:54.161977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.543 [2024-12-09 10:19:54.162050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.543 [2024-12-09 10:19:54.162067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:23.543 [2024-12-09 10:19:54.162101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.543 [2024-12-09 10:19:54.162123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.543 [2024-12-09 10:19:54.162194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.543 [2024-12-09 10:19:54.162212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:23.543 [2024-12-09 10:19:54.162231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.543 [2024-12-09 10:19:54.162246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.543 [2024-12-09 10:19:54.162472] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 376.965 ms, result 0 00:28:24.937 10:19:55 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:24.937 [2024-12-09 10:19:55.401368] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:28:24.937 [2024-12-09 10:19:55.401883] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79433 ] 00:28:24.937 [2024-12-09 10:19:55.581537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.937 [2024-12-09 10:19:55.724673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:25.503 [2024-12-09 10:19:56.113813] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:25.503 [2024-12-09 10:19:56.113928] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:25.503 [2024-12-09 10:19:56.280481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.503 [2024-12-09 10:19:56.280573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:25.503 [2024-12-09 10:19:56.280597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:25.503 [2024-12-09 10:19:56.280610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.503 [2024-12-09 10:19:56.284249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.503 [2024-12-09 10:19:56.284296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:25.503 [2024-12-09 10:19:56.284316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.608 ms 00:28:25.503 [2024-12-09 10:19:56.284328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.503 [2024-12-09 10:19:56.284537] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:25.503 [2024-12-09 10:19:56.285536] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:25.503 [2024-12-09 10:19:56.285722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.503 [2024-12-09 10:19:56.285743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:25.503 [2024-12-09 10:19:56.285757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.195 ms 00:28:25.503 [2024-12-09 10:19:56.285769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.503 [2024-12-09 10:19:56.288231] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:25.764 [2024-12-09 10:19:56.305552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.764 [2024-12-09 10:19:56.305615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:25.764 [2024-12-09 10:19:56.305639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.327 ms 00:28:25.764 [2024-12-09 10:19:56.305653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.764 [2024-12-09 10:19:56.305820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.764 [2024-12-09 10:19:56.305863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:25.764 [2024-12-09 10:19:56.305880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:28:25.764 [2024-12-09 
10:19:56.305893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.764 [2024-12-09 10:19:56.316442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.764 [2024-12-09 10:19:56.316778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:25.764 [2024-12-09 10:19:56.316816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.478 ms 00:28:25.764 [2024-12-09 10:19:56.316849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.764 [2024-12-09 10:19:56.317074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.764 [2024-12-09 10:19:56.317097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:25.764 [2024-12-09 10:19:56.317112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:28:25.764 [2024-12-09 10:19:56.317125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.764 [2024-12-09 10:19:56.317174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.764 [2024-12-09 10:19:56.317190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:25.764 [2024-12-09 10:19:56.317204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:25.764 [2024-12-09 10:19:56.317216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.764 [2024-12-09 10:19:56.317252] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:28:25.764 [2024-12-09 10:19:56.322377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.764 [2024-12-09 10:19:56.322425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:25.764 [2024-12-09 10:19:56.322452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.136 ms 00:28:25.764 [2024-12-09 10:19:56.322464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.764 [2024-12-09 10:19:56.322547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.764 [2024-12-09 10:19:56.322566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:25.764 [2024-12-09 10:19:56.322580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:28:25.764 [2024-12-09 10:19:56.322591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.764 [2024-12-09 10:19:56.322631] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:25.764 [2024-12-09 10:19:56.322664] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:25.764 [2024-12-09 10:19:56.322709] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:25.764 [2024-12-09 10:19:56.322731] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:25.764 [2024-12-09 10:19:56.322870] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:25.764 [2024-12-09 10:19:56.322892] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:25.764 [2024-12-09 10:19:56.322907] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:28:25.764 [2024-12-09 10:19:56.322929] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:25.764 [2024-12-09 10:19:56.322944] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:25.764 [2024-12-09 10:19:56.322956] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:28:25.764 [2024-12-09 10:19:56.322968] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:25.764 [2024-12-09 10:19:56.322979] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:25.764 [2024-12-09 10:19:56.322991] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:25.764 [2024-12-09 10:19:56.323004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.764 [2024-12-09 10:19:56.323016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:25.764 [2024-12-09 10:19:56.323028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.377 ms 00:28:25.764 [2024-12-09 10:19:56.323039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.764 [2024-12-09 10:19:56.323142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.764 [2024-12-09 10:19:56.323165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:25.764 [2024-12-09 10:19:56.323178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:28:25.764 [2024-12-09 10:19:56.323190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.764 [2024-12-09 10:19:56.323313] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:25.764 [2024-12-09 10:19:56.323331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:25.764 [2024-12-09 10:19:56.323345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:25.764 [2024-12-09 10:19:56.323357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:25.764 [2024-12-09 10:19:56.323370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:25.764 [2024-12-09 10:19:56.323382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:25.764 [2024-12-09 10:19:56.323393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:28:25.764 [2024-12-09 10:19:56.323406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:25.764 [2024-12-09 10:19:56.323417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:28:25.764 [2024-12-09 10:19:56.323428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:25.764 [2024-12-09 10:19:56.323439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:25.764 [2024-12-09 10:19:56.323465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:28:25.764 [2024-12-09 10:19:56.323476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:25.764 [2024-12-09 10:19:56.323487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:25.764 [2024-12-09 10:19:56.323498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:28:25.764 [2024-12-09 10:19:56.323510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:25.764 [2024-12-09 10:19:56.323521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:28:25.764 [2024-12-09 10:19:56.323533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:28:25.764 [2024-12-09 10:19:56.323544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:25.764 [2024-12-09 10:19:56.323556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:25.764 [2024-12-09 10:19:56.323567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:28:25.764 [2024-12-09 10:19:56.323578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:25.764 [2024-12-09 10:19:56.323589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:25.764 [2024-12-09 10:19:56.323600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:28:25.764 [2024-12-09 10:19:56.323611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:25.764 [2024-12-09 10:19:56.323621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:25.764 [2024-12-09 10:19:56.323632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:28:25.764 [2024-12-09 10:19:56.323642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:25.764 [2024-12-09 10:19:56.323653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:25.764 [2024-12-09 10:19:56.323664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:28:25.764 [2024-12-09 10:19:56.323674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:25.764 [2024-12-09 10:19:56.323684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:25.764 [2024-12-09 10:19:56.323695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:28:25.764 [2024-12-09 10:19:56.323706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:25.764 [2024-12-09 10:19:56.323717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:25.764 [2024-12-09 10:19:56.323728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:28:25.764 [2024-12-09 10:19:56.323738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:25.764 [2024-12-09 10:19:56.323749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:25.764 [2024-12-09 10:19:56.323760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:28:25.764 [2024-12-09 10:19:56.323771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:25.764 [2024-12-09 10:19:56.323782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:25.764 [2024-12-09 10:19:56.323793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:28:25.764 [2024-12-09 10:19:56.323805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:25.764 [2024-12-09 10:19:56.323816] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:25.764 [2024-12-09 10:19:56.324105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:25.764 [2024-12-09 10:19:56.324173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:25.765 [2024-12-09 10:19:56.324217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:25.765 [2024-12-09 10:19:56.324333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:25.765 [2024-12-09 10:19:56.324385] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:25.765 [2024-12-09 10:19:56.324427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:25.765 [2024-12-09 10:19:56.324558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:25.765 [2024-12-09 10:19:56.324663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:25.765 [2024-12-09 10:19:56.324714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:25.765 [2024-12-09 10:19:56.324756] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:25.765 [2024-12-09 10:19:56.324980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:25.765 [2024-12-09 10:19:56.325048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:28:25.765 [2024-12-09 10:19:56.325106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:28:25.765 [2024-12-09 10:19:56.325175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:28:25.765 [2024-12-09 10:19:56.325233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:28:25.765 [2024-12-09 10:19:56.325291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:28:25.765 [2024-12-09 10:19:56.325425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:28:25.765 [2024-12-09 10:19:56.325557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:28:25.765 [2024-12-09 10:19:56.325625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:28:25.765 [2024-12-09 10:19:56.325750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:28:25.765 [2024-12-09 10:19:56.325917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:28:25.765 [2024-12-09 10:19:56.325938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:28:25.765 [2024-12-09 10:19:56.325951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:28:25.765 [2024-12-09 10:19:56.325962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:28:25.765 [2024-12-09 10:19:56.325975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:28:25.765 [2024-12-09 10:19:56.325987] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:25.765 [2024-12-09 10:19:56.326001] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:25.765 [2024-12-09 10:19:56.326014] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:25.765 [2024-12-09 10:19:56.326026] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:25.765 [2024-12-09 10:19:56.326038] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:25.765 [2024-12-09 10:19:56.326050] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:25.765 [2024-12-09 10:19:56.326064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.765 [2024-12-09 10:19:56.326085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:25.765 [2024-12-09 10:19:56.326113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.820 ms 00:28:25.765 [2024-12-09 10:19:56.326126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.765 [2024-12-09 10:19:56.366723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.765 [2024-12-09 10:19:56.366808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:25.765 [2024-12-09 10:19:56.366849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.495 ms 00:28:25.765 [2024-12-09 10:19:56.366866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.765 [2024-12-09 10:19:56.367105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.765 [2024-12-09 10:19:56.367127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:25.765 [2024-12-09 10:19:56.367141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:28:25.765 [2024-12-09 10:19:56.367152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.765 [2024-12-09 10:19:56.421880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.765 [2024-12-09 10:19:56.421964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:25.765 [2024-12-09 10:19:56.421994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.691 ms 00:28:25.765 [2024-12-09 10:19:56.422008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.765 [2024-12-09 10:19:56.422218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.765 [2024-12-09 10:19:56.422240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:25.765 [2024-12-09 10:19:56.422255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:25.765 [2024-12-09 10:19:56.422268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.765 [2024-12-09 10:19:56.422875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.765 [2024-12-09 10:19:56.422896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:25.765 [2024-12-09 10:19:56.422918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.572 ms 00:28:25.765 [2024-12-09 10:19:56.422930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.765 [2024-12-09 10:19:56.423119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:28:25.765 [2024-12-09 10:19:56.423139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:25.765 [2024-12-09 10:19:56.423151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:28:25.765 [2024-12-09 10:19:56.423163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.765 [2024-12-09 10:19:56.443738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.765 [2024-12-09 10:19:56.444059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:25.765 [2024-12-09 10:19:56.444094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.539 ms 00:28:25.765 [2024-12-09 10:19:56.444108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.765 [2024-12-09 10:19:56.461055] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:25.765 [2024-12-09 10:19:56.461140] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:25.765 [2024-12-09 10:19:56.461164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.765 [2024-12-09 10:19:56.461178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:25.765 [2024-12-09 10:19:56.461196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.830 ms 00:28:25.765 [2024-12-09 10:19:56.461208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.766 [2024-12-09 10:19:56.492107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.766 [2024-12-09 10:19:56.492237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:25.766 [2024-12-09 10:19:56.492263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.737 ms 00:28:25.766 [2024-12-09 10:19:56.492277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.766 [2024-12-09 10:19:56.509289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.766 [2024-12-09 10:19:56.509369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:25.766 [2024-12-09 10:19:56.509392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.790 ms 00:28:25.766 [2024-12-09 10:19:56.509406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.766 [2024-12-09 10:19:56.525536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.766 [2024-12-09 10:19:56.525614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:25.766 [2024-12-09 10:19:56.525637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.972 ms 00:28:25.766 [2024-12-09 10:19:56.525649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.766 [2024-12-09 10:19:56.526727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.766 [2024-12-09 10:19:56.526766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:25.766 [2024-12-09 10:19:56.526784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.839 ms 00:28:25.766 [2024-12-09 10:19:56.526797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.024 [2024-12-09 10:19:56.608365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.024 [2024-12-09 
10:19:56.608474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:26.024 [2024-12-09 10:19:56.608499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.512 ms 00:28:26.024 [2024-12-09 10:19:56.608513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.024 [2024-12-09 10:19:56.625066] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:28:26.024 [2024-12-09 10:19:56.647680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.024 [2024-12-09 10:19:56.647785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:26.024 [2024-12-09 10:19:56.647816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.908 ms 00:28:26.024 [2024-12-09 10:19:56.647862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.024 [2024-12-09 10:19:56.648067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.024 [2024-12-09 10:19:56.648088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:26.024 [2024-12-09 10:19:56.648103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:26.024 [2024-12-09 10:19:56.648114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.024 [2024-12-09 10:19:56.648202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.024 [2024-12-09 10:19:56.648219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:26.024 [2024-12-09 10:19:56.648233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:28:26.024 [2024-12-09 10:19:56.648252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.024 [2024-12-09 10:19:56.648303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.024 [2024-12-09 10:19:56.648334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:26.024 [2024-12-09 10:19:56.648347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:28:26.024 [2024-12-09 10:19:56.648358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.025 [2024-12-09 10:19:56.648409] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:26.025 [2024-12-09 10:19:56.648434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.025 [2024-12-09 10:19:56.648447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:26.025 [2024-12-09 10:19:56.648460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:28:26.025 [2024-12-09 10:19:56.648472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.025 [2024-12-09 10:19:56.680975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.025 [2024-12-09 10:19:56.681047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:26.025 [2024-12-09 10:19:56.681070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.472 ms 00:28:26.025 [2024-12-09 10:19:56.681083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.025 [2024-12-09 10:19:56.681236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.025 [2024-12-09 10:19:56.681257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:26.025 [2024-12-09 
10:19:56.681273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:28:26.025 [2024-12-09 10:19:56.681286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.025 [2024-12-09 10:19:56.682735] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:26.025 [2024-12-09 10:19:56.687112] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 401.858 ms, result 0 00:28:26.025 [2024-12-09 10:19:56.687962] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:26.025 [2024-12-09 10:19:56.704417] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:27.401  [2024-12-09T10:19:58.765Z] Copying: 26/256 [MB] (26 MBps) [2024-12-09T10:20:00.176Z] Copying: 47/256 [MB] (21 MBps) [2024-12-09T10:20:01.111Z] Copying: 69/256 [MB] (21 MBps) [2024-12-09T10:20:02.046Z] Copying: 91/256 [MB] (21 MBps) [2024-12-09T10:20:02.983Z] Copying: 113/256 [MB] (22 MBps) [2024-12-09T10:20:03.919Z] Copying: 135/256 [MB] (21 MBps) [2024-12-09T10:20:04.854Z] Copying: 157/256 [MB] (21 MBps) [2024-12-09T10:20:05.789Z] Copying: 178/256 [MB] (21 MBps) [2024-12-09T10:20:06.820Z] Copying: 200/256 [MB] (22 MBps) [2024-12-09T10:20:07.769Z] Copying: 222/256 [MB] (21 MBps) [2024-12-09T10:20:08.336Z] Copying: 243/256 [MB] (21 MBps) [2024-12-09T10:20:08.906Z] Copying: 256/256 [MB] (average 22 MBps)[2024-12-09 10:20:08.627839] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:38.109 [2024-12-09 10:20:08.642235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.109 [2024-12-09 10:20:08.642415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:38.109 [2024-12-09 10:20:08.642579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:38.109 [2024-12-09 10:20:08.642604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.109 [2024-12-09 10:20:08.642648] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:28:38.109 [2024-12-09 10:20:08.647206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.109 [2024-12-09 10:20:08.647383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:38.109 [2024-12-09 10:20:08.647508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.533 ms 00:28:38.109 [2024-12-09 10:20:08.647559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.109 [2024-12-09 10:20:08.648011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.109 [2024-12-09 10:20:08.648152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:38.109 [2024-12-09 10:20:08.648272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:28:38.109 [2024-12-09 10:20:08.648383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.109 [2024-12-09 10:20:08.652983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.109 [2024-12-09 10:20:08.653156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:38.109 [2024-12-09 10:20:08.653402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.523 ms 00:28:38.109 [2024-12-09 
10:20:08.653456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.109 [2024-12-09 10:20:08.661201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.109 [2024-12-09 10:20:08.661396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:38.109 [2024-12-09 10:20:08.661521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.630 ms 00:28:38.109 [2024-12-09 10:20:08.661624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.109 [2024-12-09 10:20:08.692841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.109 [2024-12-09 10:20:08.693020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:38.109 [2024-12-09 10:20:08.693157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.113 ms 00:28:38.109 [2024-12-09 10:20:08.693256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.109 [2024-12-09 10:20:08.710722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.109 [2024-12-09 10:20:08.710941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:38.109 [2024-12-09 10:20:08.711081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.365 ms 00:28:38.109 [2024-12-09 10:20:08.711209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.109 [2024-12-09 10:20:08.711450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.109 [2024-12-09 10:20:08.711599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:38.109 [2024-12-09 10:20:08.711767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:28:38.109 [2024-12-09 10:20:08.711820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.109 [2024-12-09 10:20:08.741173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.109 [2024-12-09 10:20:08.741329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:38.109 [2024-12-09 10:20:08.741466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.274 ms 00:28:38.109 [2024-12-09 10:20:08.741565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.109 [2024-12-09 10:20:08.772935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.109 [2024-12-09 10:20:08.773131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:38.109 [2024-12-09 10:20:08.773256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.230 ms 00:28:38.109 [2024-12-09 10:20:08.773280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.109 [2024-12-09 10:20:08.803524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.109 [2024-12-09 10:20:08.803573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:38.109 [2024-12-09 10:20:08.803592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.145 ms 00:28:38.109 [2024-12-09 10:20:08.803605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.109 [2024-12-09 10:20:08.834312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.109 [2024-12-09 10:20:08.834355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:38.109 [2024-12-09 10:20:08.834373] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.591 ms 00:28:38.109 [2024-12-09 10:20:08.834400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.110 [2024-12-09 10:20:08.834537] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:38.110 [2024-12-09 10:20:08.834580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:38.110 [2024-12-09 10:20:08.834595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:38.110 [2024-12-09 10:20:08.834607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:38.110 [2024-12-09 10:20:08.834633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:38.110 [2024-12-09 10:20:08.834645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:38.110 [2024-12-09 10:20:08.834657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:38.110 [2024-12-09 10:20:08.834669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:38.110 [2024-12-09 10:20:08.834680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:38.110 [2024-12-09 10:20:08.834691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:38.110 [2024-12-09 10:20:08.834703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:38.110 [2024-12-09 10:20:08.834714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:38.110 [2024-12-09 10:20:08.834725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:38.110 [2024-12-09 10:20:08.834737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:38.110 [2024-12-09 10:20:08.834749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:38.110 [2024-12-09 10:20:08.834760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:38.110 [2024-12-09 10:20:08.834771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:38.110 [2024-12-09 10:20:08.834782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:38.110 [2024-12-09 10:20:08.834792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:38.110 [2024-12-09 10:20:08.834803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:38.110 [2024-12-09 10:20:08.834814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:38.110 [2024-12-09 10:20:08.834826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:38.110 [2024-12-09 10:20:08.834837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:38.110 [2024-12-09 10:20:08.834848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 
state: free 00:28:38.110 [2024-12-09 10:20:08.834875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:38.110 [2024-12-09 10:20:08.834903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:38.110 [2024-12-09 10:20:08.834938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:38.110 [2024-12-09 10:20:08.834956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:38.110 [2024-12-09 10:20:08.834969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:38.110 [2024-12-09 10:20:08.834982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:38.110 [2024-12-09 10:20:08.834994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:38.110 [2024-12-09 10:20:08.835007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:38.110 [2024-12-09 10:20:08.835023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:38.110 [2024-12-09 10:20:08.835035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:38.110 [2024-12-09 10:20:08.835049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:38.110 [2024-12-09 10:20:08.835062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:38.110 [2024-12-09 10:20:08.835074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:38.110 [2024-12-09 10:20:08.835094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:38.110 [2024-12-09 10:20:08.835106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:38.110 [2024-12-09 10:20:08.835118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:38.110 [2024-12-09 10:20:08.835131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:38.110 [2024-12-09 10:20:08.835143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:38.110 [2024-12-09 10:20:08.835156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:38.110 [2024-12-09 10:20:08.835169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:38.110 [2024-12-09 10:20:08.835182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:38.110 [2024-12-09 10:20:08.835194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:38.110 [2024-12-09 10:20:08.835207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:38.110 [2024-12-09 10:20:08.835219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:38.110 [2024-12-09 10:20:08.835232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 
0 / 261120 wr_cnt: 0 state: free
00:28:38.110 [2024-12-09 10:20:08.835244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:28:38.110 [2024-12-09 10:20:08.835256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:28:38.110 [2024-12-09 10:20:08.835269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:28:38.110 [2024-12-09 10:20:08.835281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:28:38.110 [2024-12-09 10:20:08.835293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:28:38.110 [2024-12-09 10:20:08.835306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:28:38.110 [2024-12-09 10:20:08.835318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:28:38.110 [2024-12-09 10:20:08.835330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:28:38.110 [2024-12-09 10:20:08.835356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:28:38.110 [2024-12-09 10:20:08.835368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:28:38.110 [2024-12-09 10:20:08.835380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:28:38.110 [2024-12-09 10:20:08.835392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:28:38.110 [2024-12-09 10:20:08.835404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:28:38.110 [2024-12-09 10:20:08.835419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:28:38.110 [2024-12-09 10:20:08.835431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:28:38.110 [2024-12-09 10:20:08.835443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:28:38.110 [2024-12-09 10:20:08.835455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:28:38.110 [2024-12-09 10:20:08.835467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:28:38.110 [2024-12-09 10:20:08.835479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:28:38.110 [2024-12-09 10:20:08.835491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:28:38.110 [2024-12-09 10:20:08.835502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:28:38.110 [2024-12-09 10:20:08.835514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:28:38.110 [2024-12-09 10:20:08.835526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:28:38.110 [2024-12-09 10:20:08.835537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:28:38.110 [2024-12-09 10:20:08.835549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:28:38.110 [2024-12-09 10:20:08.835560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:28:38.110 [2024-12-09 10:20:08.835589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:28:38.110 [2024-12-09 10:20:08.835601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:28:38.110 [2024-12-09 10:20:08.835613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:28:38.110 [2024-12-09 10:20:08.835625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:28:38.110 [2024-12-09 10:20:08.835637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:28:38.110 [2024-12-09 10:20:08.835649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:28:38.110 [2024-12-09 10:20:08.835660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:28:38.110 [2024-12-09 10:20:08.835672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:28:38.110 [2024-12-09 10:20:08.835685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:28:38.110 [2024-12-09 10:20:08.835697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:28:38.110 [2024-12-09 10:20:08.835709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:28:38.110 [2024-12-09 10:20:08.835721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:28:38.110 [2024-12-09 10:20:08.835734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:28:38.110 [2024-12-09 10:20:08.835745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:28:38.110 [2024-12-09 10:20:08.835758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:28:38.110 [2024-12-09 10:20:08.835770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:28:38.111 [2024-12-09 10:20:08.835782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:28:38.111 [2024-12-09 10:20:08.835795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:28:38.111 [2024-12-09 10:20:08.835808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:28:38.111 [2024-12-09 10:20:08.835836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:28:38.111 [2024-12-09 10:20:08.835849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:28:38.111 [2024-12-09 10:20:08.835875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:28:38.111 [2024-12-09 10:20:08.835889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:28:38.111 [2024-12-09 10:20:08.835902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:28:38.111 [2024-12-09 10:20:08.835915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:28:38.111 [2024-12-09 10:20:08.835927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:28:38.111 [2024-12-09 10:20:08.835949] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:28:38.111 [2024-12-09 10:20:08.835961] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1416d288-a52a-4793-951a-3821cfb97ba2
00:28:38.111 [2024-12-09 10:20:08.835974] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:28:38.111 [2024-12-09 10:20:08.835985] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:28:38.111 [2024-12-09 10:20:08.835997] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:28:38.111 [2024-12-09 10:20:08.836009] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:28:38.111 [2024-12-09 10:20:08.836020] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:28:38.111 [2024-12-09 10:20:08.836032] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:28:38.111 [2024-12-09 10:20:08.836049] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:28:38.111 [2024-12-09 10:20:08.836060] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:28:38.111 [2024-12-09 10:20:08.836071] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:28:38.111 [2024-12-09 10:20:08.836083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:38.111 [2024-12-09 10:20:08.836095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:28:38.111 [2024-12-09 10:20:08.836107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.547 ms
00:28:38.111 [2024-12-09 10:20:08.836119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:38.111 [2024-12-09 10:20:08.853636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:38.111 [2024-12-09 10:20:08.853870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:28:38.111 [2024-12-09 10:20:08.853900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.489 ms
00:28:38.111 [2024-12-09 10:20:08.853914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:38.111 [2024-12-09 10:20:08.854445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:38.111 [2024-12-09 10:20:08.854501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:28:38.111 [2024-12-09 10:20:08.854533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.470 ms
00:28:38.111 [2024-12-09 10:20:08.854545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:38.111 [2024-12-09 10:20:08.904567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:38.369 [2024-12-09 10:20:08.904789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:28:38.369 [2024-12-09 10:20:08.904817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:38.369 [2024-12-09 10:20:08.904838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:38.369 [2024-12-09 10:20:08.904989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
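
A note on the "WAF: inf" reported in the stats dump above, assuming ftl_dev_dump_stats derives it from the two counters printed just before it:

    WAF = total writes / user writes = 960 / 0 -> inf

All 960 writes here are internal (metadata) traffic with zero user writes, so the write-amplification factor has no finite value; for a device shut down without any user I/O this reading is expected rather than a fault.
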
00:28:38.369 [2024-12-09 10:20:08.905009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:38.369 [2024-12-09 10:20:08.905021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.369 [2024-12-09 10:20:08.905033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.369 [2024-12-09 10:20:08.905101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.369 [2024-12-09 10:20:08.905121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:38.369 [2024-12-09 10:20:08.905133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.369 [2024-12-09 10:20:08.905146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.369 [2024-12-09 10:20:08.905189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.369 [2024-12-09 10:20:08.905204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:38.369 [2024-12-09 10:20:08.905216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.369 [2024-12-09 10:20:08.905228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.369 [2024-12-09 10:20:09.023464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.369 [2024-12-09 10:20:09.023559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:38.369 [2024-12-09 10:20:09.023582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.369 [2024-12-09 10:20:09.023595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.369 [2024-12-09 10:20:09.113384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.369 [2024-12-09 10:20:09.113487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:38.369 [2024-12-09 10:20:09.113514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.369 [2024-12-09 10:20:09.113530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.369 [2024-12-09 10:20:09.113718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.369 [2024-12-09 10:20:09.113739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:38.369 [2024-12-09 10:20:09.113754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.369 [2024-12-09 10:20:09.113767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.369 [2024-12-09 10:20:09.113811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.369 [2024-12-09 10:20:09.113842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:38.369 [2024-12-09 10:20:09.113856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.369 [2024-12-09 10:20:09.113869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.370 [2024-12-09 10:20:09.114064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.370 [2024-12-09 10:20:09.114089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:38.370 [2024-12-09 10:20:09.114141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.370 [2024-12-09 10:20:09.114157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.370 [2024-12-09 
10:20:09.114242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.370 [2024-12-09 10:20:09.114272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:38.370 [2024-12-09 10:20:09.114319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.370 [2024-12-09 10:20:09.114334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.370 [2024-12-09 10:20:09.114400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.370 [2024-12-09 10:20:09.114432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:38.370 [2024-12-09 10:20:09.114449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.370 [2024-12-09 10:20:09.114463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.370 [2024-12-09 10:20:09.114535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.370 [2024-12-09 10:20:09.114566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:38.370 [2024-12-09 10:20:09.114612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.370 [2024-12-09 10:20:09.114641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.370 [2024-12-09 10:20:09.114846] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 472.603 ms, result 0 00:28:39.746 00:28:39.746 00:28:39.746 10:20:10 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:40.314 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:28:40.314 10:20:10 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:28:40.314 10:20:10 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:28:40.314 10:20:10 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:40.314 10:20:10 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:40.314 10:20:10 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:28:40.314 10:20:10 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:28:40.314 10:20:11 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 79358 00:28:40.314 Process with pid 79358 is not found 00:28:40.314 10:20:11 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79358 ']' 00:28:40.314 10:20:11 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79358 00:28:40.314 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79358) - No such process 00:28:40.314 10:20:11 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 79358 is not found' 00:28:40.314 ************************************ 00:28:40.314 END TEST ftl_trim 00:28:40.314 ************************************ 00:28:40.314 00:28:40.314 real 1m17.779s 00:28:40.314 user 1m46.280s 00:28:40.314 sys 0m8.528s 00:28:40.314 10:20:11 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:40.314 10:20:11 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:28:40.314 10:20:11 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:28:40.314 10:20:11 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:28:40.314 10:20:11 ftl -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:28:40.314 10:20:11 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:40.314 ************************************ 00:28:40.314 START TEST ftl_restore 00:28:40.314 ************************************ 00:28:40.314 10:20:11 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:28:40.573 * Looking for test storage... 00:28:40.573 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:40.573 10:20:11 ftl.ftl_restore -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:40.573 10:20:11 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lcov --version 00:28:40.573 10:20:11 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:40.573 10:20:11 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:40.573 10:20:11 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:40.573 10:20:11 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:40.573 10:20:11 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:40.573 10:20:11 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:28:40.573 10:20:11 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:28:40.573 10:20:11 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:28:40.573 10:20:11 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:28:40.573 10:20:11 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:28:40.573 10:20:11 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:28:40.573 10:20:11 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:28:40.573 10:20:11 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:40.573 10:20:11 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:28:40.573 10:20:11 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:28:40.573 10:20:11 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:40.573 10:20:11 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:40.573 10:20:11 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:28:40.573 10:20:11 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:28:40.573 10:20:11 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:40.573 10:20:11 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:28:40.573 10:20:11 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:28:40.573 10:20:11 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:28:40.573 10:20:11 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:28:40.573 10:20:11 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:40.573 10:20:11 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:28:40.573 10:20:11 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:28:40.573 10:20:11 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:40.573 10:20:11 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:40.573 10:20:11 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:28:40.573 10:20:11 ftl.ftl_restore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:40.573 10:20:11 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:40.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:40.573 --rc genhtml_branch_coverage=1 00:28:40.573 --rc genhtml_function_coverage=1 00:28:40.573 --rc genhtml_legend=1 00:28:40.573 --rc geninfo_all_blocks=1 00:28:40.573 --rc geninfo_unexecuted_blocks=1 00:28:40.573 00:28:40.573 ' 00:28:40.573 10:20:11 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:40.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:40.573 --rc genhtml_branch_coverage=1 00:28:40.573 --rc genhtml_function_coverage=1 00:28:40.573 --rc genhtml_legend=1 00:28:40.573 --rc geninfo_all_blocks=1 00:28:40.573 --rc geninfo_unexecuted_blocks=1 00:28:40.573 00:28:40.573 ' 00:28:40.573 10:20:11 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:40.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:40.574 --rc genhtml_branch_coverage=1 00:28:40.574 --rc genhtml_function_coverage=1 00:28:40.574 --rc genhtml_legend=1 00:28:40.574 --rc geninfo_all_blocks=1 00:28:40.574 --rc geninfo_unexecuted_blocks=1 00:28:40.574 00:28:40.574 ' 00:28:40.574 10:20:11 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:40.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:40.574 --rc genhtml_branch_coverage=1 00:28:40.574 --rc genhtml_function_coverage=1 00:28:40.574 --rc genhtml_legend=1 00:28:40.574 --rc geninfo_all_blocks=1 00:28:40.574 --rc geninfo_unexecuted_blocks=1 00:28:40.574 00:28:40.574 ' 00:28:40.574 10:20:11 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:40.574 10:20:11 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:28:40.574 10:20:11 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:40.574 10:20:11 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:40.574 10:20:11 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
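
The "lt 1.15 2" walk traced above is scripts/common.sh comparing the installed lcov version component-wise: both version strings are split on dots, dashes and colons (IFS=.-:), each component is vetted by the decimal guard (the "[[ 1 =~ ^[0-9]+$ ]]" steps), and the verdict comes from the first unequal component. A condensed bash sketch of that flow, reconstructed from the trace rather than copied from the source:

    # cmp_versions VER1 OP VER2, e.g. cmp_versions 1.15 '<' 2
    cmp_versions() {
        local op=$2 v a b
        local IFS=.-:                       # split on '.', '-' and ':', as traced
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0} # pad the shorter version with zeros
            if (( a > b )); then [[ $op == '>' ]]; return; fi
            if (( a < b )); then [[ $op == '<' ]]; return; fi
        done
        [[ $op == '=' ]]                    # every component equal
    }

    cmp_versions 1.15 '<' 2 && echo older   # prints "older": 1 < 2 on the first component

That verdict (lcov 1.15 predates 2) is what selects the old-style "--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1" options exported into LCOV_OPTS in the next steps.
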
00:28:40.574 10:20:11 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:40.574 10:20:11 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:40.574 10:20:11 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:40.574 10:20:11 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:40.574 10:20:11 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:40.574 10:20:11 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:40.574 10:20:11 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:40.574 10:20:11 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:40.574 10:20:11 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:40.574 10:20:11 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:40.574 10:20:11 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:40.574 10:20:11 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:40.574 10:20:11 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:40.574 10:20:11 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:40.574 10:20:11 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:40.574 10:20:11 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:40.574 10:20:11 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:40.574 10:20:11 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:40.574 10:20:11 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:40.574 10:20:11 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:40.574 10:20:11 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:40.574 10:20:11 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:40.574 10:20:11 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:40.574 10:20:11 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:40.574 10:20:11 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:40.574 10:20:11 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:28:40.574 10:20:11 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.gzGr9OCmDt 00:28:40.574 10:20:11 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:40.574 10:20:11 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:28:40.574 10:20:11 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:28:40.574 10:20:11 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:40.574 10:20:11 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:28:40.574 10:20:11 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:28:40.574 10:20:11 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:28:40.574 10:20:11 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:28:40.574 
10:20:11 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=79648 00:28:40.574 10:20:11 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:40.574 10:20:11 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 79648 00:28:40.574 10:20:11 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 79648 ']' 00:28:40.574 10:20:11 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:40.574 10:20:11 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:40.574 10:20:11 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:40.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:40.574 10:20:11 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:40.574 10:20:11 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:28:40.833 [2024-12-09 10:20:11.395569] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:28:40.833 [2024-12-09 10:20:11.395975] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79648 ] 00:28:40.833 [2024-12-09 10:20:11.583506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:41.092 [2024-12-09 10:20:11.756190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:42.037 10:20:12 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:42.037 10:20:12 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:28:42.037 10:20:12 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:28:42.037 10:20:12 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:28:42.037 10:20:12 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:42.037 10:20:12 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:28:42.037 10:20:12 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:28:42.037 10:20:12 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:42.337 10:20:13 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:28:42.337 10:20:13 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:28:42.337 10:20:13 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:28:42.337 10:20:13 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:28:42.337 10:20:13 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:42.337 10:20:13 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:28:42.337 10:20:13 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:28:42.337 10:20:13 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:28:42.904 10:20:13 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:42.904 { 00:28:42.904 "name": "nvme0n1", 00:28:42.904 "aliases": [ 00:28:42.904 "b1612eba-ea8b-40fb-9d7d-6e093dfaea98" 00:28:42.904 ], 00:28:42.904 "product_name": "NVMe disk", 00:28:42.904 "block_size": 4096, 00:28:42.904 "num_blocks": 1310720, 00:28:42.904 "uuid": 
"b1612eba-ea8b-40fb-9d7d-6e093dfaea98", 00:28:42.904 "numa_id": -1, 00:28:42.904 "assigned_rate_limits": { 00:28:42.904 "rw_ios_per_sec": 0, 00:28:42.904 "rw_mbytes_per_sec": 0, 00:28:42.904 "r_mbytes_per_sec": 0, 00:28:42.904 "w_mbytes_per_sec": 0 00:28:42.904 }, 00:28:42.904 "claimed": true, 00:28:42.904 "claim_type": "read_many_write_one", 00:28:42.904 "zoned": false, 00:28:42.904 "supported_io_types": { 00:28:42.904 "read": true, 00:28:42.904 "write": true, 00:28:42.904 "unmap": true, 00:28:42.904 "flush": true, 00:28:42.904 "reset": true, 00:28:42.904 "nvme_admin": true, 00:28:42.904 "nvme_io": true, 00:28:42.904 "nvme_io_md": false, 00:28:42.904 "write_zeroes": true, 00:28:42.904 "zcopy": false, 00:28:42.904 "get_zone_info": false, 00:28:42.904 "zone_management": false, 00:28:42.904 "zone_append": false, 00:28:42.904 "compare": true, 00:28:42.904 "compare_and_write": false, 00:28:42.904 "abort": true, 00:28:42.904 "seek_hole": false, 00:28:42.904 "seek_data": false, 00:28:42.904 "copy": true, 00:28:42.904 "nvme_iov_md": false 00:28:42.904 }, 00:28:42.904 "driver_specific": { 00:28:42.904 "nvme": [ 00:28:42.904 { 00:28:42.904 "pci_address": "0000:00:11.0", 00:28:42.904 "trid": { 00:28:42.904 "trtype": "PCIe", 00:28:42.904 "traddr": "0000:00:11.0" 00:28:42.904 }, 00:28:42.904 "ctrlr_data": { 00:28:42.904 "cntlid": 0, 00:28:42.904 "vendor_id": "0x1b36", 00:28:42.904 "model_number": "QEMU NVMe Ctrl", 00:28:42.904 "serial_number": "12341", 00:28:42.904 "firmware_revision": "8.0.0", 00:28:42.904 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:42.904 "oacs": { 00:28:42.904 "security": 0, 00:28:42.904 "format": 1, 00:28:42.904 "firmware": 0, 00:28:42.904 "ns_manage": 1 00:28:42.904 }, 00:28:42.904 "multi_ctrlr": false, 00:28:42.904 "ana_reporting": false 00:28:42.904 }, 00:28:42.904 "vs": { 00:28:42.904 "nvme_version": "1.4" 00:28:42.904 }, 00:28:42.904 "ns_data": { 00:28:42.904 "id": 1, 00:28:42.904 "can_share": false 00:28:42.904 } 00:28:42.904 } 00:28:42.904 ], 00:28:42.904 "mp_policy": "active_passive" 00:28:42.904 } 00:28:42.904 } 00:28:42.904 ]' 00:28:42.904 10:20:13 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:42.904 10:20:13 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:28:42.904 10:20:13 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:42.904 10:20:13 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:28:42.904 10:20:13 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:28:42.904 10:20:13 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:28:42.904 10:20:13 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:28:42.904 10:20:13 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:28:42.904 10:20:13 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:28:42.904 10:20:13 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:42.904 10:20:13 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:43.162 10:20:13 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=db7fa2c6-7017-4ef1-b032-86775a359c21 00:28:43.162 10:20:13 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:28:43.162 10:20:13 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u db7fa2c6-7017-4ef1-b032-86775a359c21 00:28:43.420 10:20:14 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:28:43.679 10:20:14 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=6279a32a-519c-42f5-9eb2-82118898d67f 00:28:43.679 10:20:14 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 6279a32a-519c-42f5-9eb2-82118898d67f 00:28:43.937 10:20:14 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=31d74cfe-c79e-45a2-bbc4-022773af1fd1 00:28:43.937 10:20:14 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:28:43.937 10:20:14 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 31d74cfe-c79e-45a2-bbc4-022773af1fd1 00:28:43.937 10:20:14 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:28:43.937 10:20:14 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:43.937 10:20:14 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=31d74cfe-c79e-45a2-bbc4-022773af1fd1 00:28:43.937 10:20:14 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:28:43.937 10:20:14 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 31d74cfe-c79e-45a2-bbc4-022773af1fd1 00:28:43.937 10:20:14 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=31d74cfe-c79e-45a2-bbc4-022773af1fd1 00:28:43.937 10:20:14 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:43.937 10:20:14 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:28:43.937 10:20:14 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:28:43.937 10:20:14 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 31d74cfe-c79e-45a2-bbc4-022773af1fd1 00:28:44.195 10:20:14 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:44.195 { 00:28:44.195 "name": "31d74cfe-c79e-45a2-bbc4-022773af1fd1", 00:28:44.195 "aliases": [ 00:28:44.195 "lvs/nvme0n1p0" 00:28:44.195 ], 00:28:44.195 "product_name": "Logical Volume", 00:28:44.195 "block_size": 4096, 00:28:44.195 "num_blocks": 26476544, 00:28:44.195 "uuid": "31d74cfe-c79e-45a2-bbc4-022773af1fd1", 00:28:44.195 "assigned_rate_limits": { 00:28:44.195 "rw_ios_per_sec": 0, 00:28:44.195 "rw_mbytes_per_sec": 0, 00:28:44.195 "r_mbytes_per_sec": 0, 00:28:44.195 "w_mbytes_per_sec": 0 00:28:44.195 }, 00:28:44.195 "claimed": false, 00:28:44.195 "zoned": false, 00:28:44.195 "supported_io_types": { 00:28:44.195 "read": true, 00:28:44.195 "write": true, 00:28:44.195 "unmap": true, 00:28:44.195 "flush": false, 00:28:44.195 "reset": true, 00:28:44.195 "nvme_admin": false, 00:28:44.195 "nvme_io": false, 00:28:44.195 "nvme_io_md": false, 00:28:44.195 "write_zeroes": true, 00:28:44.195 "zcopy": false, 00:28:44.195 "get_zone_info": false, 00:28:44.195 "zone_management": false, 00:28:44.195 "zone_append": false, 00:28:44.195 "compare": false, 00:28:44.195 "compare_and_write": false, 00:28:44.195 "abort": false, 00:28:44.195 "seek_hole": true, 00:28:44.195 "seek_data": true, 00:28:44.195 "copy": false, 00:28:44.195 "nvme_iov_md": false 00:28:44.195 }, 00:28:44.195 "driver_specific": { 00:28:44.195 "lvol": { 00:28:44.195 "lvol_store_uuid": "6279a32a-519c-42f5-9eb2-82118898d67f", 00:28:44.195 "base_bdev": "nvme0n1", 00:28:44.195 "thin_provision": true, 00:28:44.195 "num_allocated_clusters": 0, 00:28:44.195 "snapshot": false, 00:28:44.195 "clone": false, 00:28:44.195 "esnap_clone": false 00:28:44.195 } 00:28:44.195 } 00:28:44.195 } 00:28:44.195 ]' 00:28:44.195 10:20:14 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:44.454 10:20:15 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:28:44.454 10:20:15 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:44.454 10:20:15 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:44.454 10:20:15 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:44.454 10:20:15 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:28:44.454 10:20:15 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:28:44.454 10:20:15 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:28:44.454 10:20:15 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:28:44.711 10:20:15 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:28:44.712 10:20:15 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:28:44.712 10:20:15 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 31d74cfe-c79e-45a2-bbc4-022773af1fd1 00:28:44.712 10:20:15 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=31d74cfe-c79e-45a2-bbc4-022773af1fd1 00:28:44.712 10:20:15 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:44.712 10:20:15 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:28:44.712 10:20:15 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:28:44.712 10:20:15 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 31d74cfe-c79e-45a2-bbc4-022773af1fd1 00:28:44.970 10:20:15 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:44.970 { 00:28:44.970 "name": "31d74cfe-c79e-45a2-bbc4-022773af1fd1", 00:28:44.970 "aliases": [ 00:28:44.970 "lvs/nvme0n1p0" 00:28:44.970 ], 00:28:44.970 "product_name": "Logical Volume", 00:28:44.970 "block_size": 4096, 00:28:44.970 "num_blocks": 26476544, 00:28:44.970 "uuid": "31d74cfe-c79e-45a2-bbc4-022773af1fd1", 00:28:44.970 "assigned_rate_limits": { 00:28:44.970 "rw_ios_per_sec": 0, 00:28:44.970 "rw_mbytes_per_sec": 0, 00:28:44.970 "r_mbytes_per_sec": 0, 00:28:44.970 "w_mbytes_per_sec": 0 00:28:44.970 }, 00:28:44.970 "claimed": false, 00:28:44.970 "zoned": false, 00:28:44.970 "supported_io_types": { 00:28:44.970 "read": true, 00:28:44.970 "write": true, 00:28:44.970 "unmap": true, 00:28:44.970 "flush": false, 00:28:44.970 "reset": true, 00:28:44.970 "nvme_admin": false, 00:28:44.970 "nvme_io": false, 00:28:44.971 "nvme_io_md": false, 00:28:44.971 "write_zeroes": true, 00:28:44.971 "zcopy": false, 00:28:44.971 "get_zone_info": false, 00:28:44.971 "zone_management": false, 00:28:44.971 "zone_append": false, 00:28:44.971 "compare": false, 00:28:44.971 "compare_and_write": false, 00:28:44.971 "abort": false, 00:28:44.971 "seek_hole": true, 00:28:44.971 "seek_data": true, 00:28:44.971 "copy": false, 00:28:44.971 "nvme_iov_md": false 00:28:44.971 }, 00:28:44.971 "driver_specific": { 00:28:44.971 "lvol": { 00:28:44.971 "lvol_store_uuid": "6279a32a-519c-42f5-9eb2-82118898d67f", 00:28:44.971 "base_bdev": "nvme0n1", 00:28:44.971 "thin_provision": true, 00:28:44.971 "num_allocated_clusters": 0, 00:28:44.971 "snapshot": false, 00:28:44.971 "clone": false, 00:28:44.971 "esnap_clone": false 00:28:44.971 } 00:28:44.971 } 00:28:44.971 } 00:28:44.971 ]' 00:28:44.971 10:20:15 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
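
The get_bdev_size walk that produced bs=4096, nb=1310720 and bdev_size=5120 above (and that is being repeated here for the 31d74cfe... lvol) is one line of arithmetic over two fields of the dumped JSON. A minimal sketch, assuming the bdev_get_bdevs output has been saved to bdev.json; the script itself feeds rpc.py output straight to jq:

    bs=$(jq '.[] .block_size' bdev.json)    # e.g. 4096
    nb=$(jq '.[] .num_blocks' bdev.json)    # e.g. 1310720
    echo $(( bs * nb / 1024 / 1024 ))       # bytes -> MiB: 5120

For the base QEMU namespace that gives 4096 x 1310720 / 2^20 = 5120 MiB, which is why the 103424 MiB volume had to be created thin-provisioned (-t) to oversubscribe it; the same walk on the thin volume itself lands on 103424 MiB just below.
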
00:28:45.229 10:20:15 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:28:45.229 10:20:15 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:45.229 10:20:15 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:45.229 10:20:15 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:45.229 10:20:15 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:28:45.229 10:20:15 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:28:45.229 10:20:15 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:28:45.488 10:20:16 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:28:45.488 10:20:16 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 31d74cfe-c79e-45a2-bbc4-022773af1fd1 00:28:45.488 10:20:16 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=31d74cfe-c79e-45a2-bbc4-022773af1fd1 00:28:45.488 10:20:16 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:45.488 10:20:16 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:28:45.488 10:20:16 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:28:45.488 10:20:16 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 31d74cfe-c79e-45a2-bbc4-022773af1fd1 00:28:45.747 10:20:16 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:45.747 { 00:28:45.747 "name": "31d74cfe-c79e-45a2-bbc4-022773af1fd1", 00:28:45.747 "aliases": [ 00:28:45.747 "lvs/nvme0n1p0" 00:28:45.747 ], 00:28:45.747 "product_name": "Logical Volume", 00:28:45.747 "block_size": 4096, 00:28:45.747 "num_blocks": 26476544, 00:28:45.747 "uuid": "31d74cfe-c79e-45a2-bbc4-022773af1fd1", 00:28:45.747 "assigned_rate_limits": { 00:28:45.747 "rw_ios_per_sec": 0, 00:28:45.747 "rw_mbytes_per_sec": 0, 00:28:45.747 "r_mbytes_per_sec": 0, 00:28:45.747 "w_mbytes_per_sec": 0 00:28:45.747 }, 00:28:45.747 "claimed": false, 00:28:45.747 "zoned": false, 00:28:45.747 "supported_io_types": { 00:28:45.747 "read": true, 00:28:45.747 "write": true, 00:28:45.747 "unmap": true, 00:28:45.747 "flush": false, 00:28:45.747 "reset": true, 00:28:45.747 "nvme_admin": false, 00:28:45.747 "nvme_io": false, 00:28:45.747 "nvme_io_md": false, 00:28:45.747 "write_zeroes": true, 00:28:45.747 "zcopy": false, 00:28:45.747 "get_zone_info": false, 00:28:45.747 "zone_management": false, 00:28:45.747 "zone_append": false, 00:28:45.747 "compare": false, 00:28:45.747 "compare_and_write": false, 00:28:45.747 "abort": false, 00:28:45.747 "seek_hole": true, 00:28:45.747 "seek_data": true, 00:28:45.747 "copy": false, 00:28:45.747 "nvme_iov_md": false 00:28:45.747 }, 00:28:45.747 "driver_specific": { 00:28:45.747 "lvol": { 00:28:45.747 "lvol_store_uuid": "6279a32a-519c-42f5-9eb2-82118898d67f", 00:28:45.747 "base_bdev": "nvme0n1", 00:28:45.747 "thin_provision": true, 00:28:45.747 "num_allocated_clusters": 0, 00:28:45.747 "snapshot": false, 00:28:45.747 "clone": false, 00:28:45.747 "esnap_clone": false 00:28:45.747 } 00:28:45.747 } 00:28:45.747 } 00:28:45.747 ]' 00:28:45.747 10:20:16 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:45.747 10:20:16 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:28:45.747 10:20:16 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:45.747 10:20:16 ftl.ftl_restore -- 
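
Stripped of the xtrace scaffolding, the bdev stack assembled over the last few steps, together with the bdev_ftl_create call that follows below, comes down to six rpc.py invocations (clean-up of the pre-existing lvstore omitted). A condensed sketch, with <lvs-uuid> and <lvol-uuid> standing in for the generated 6279a32a-... and 31d74cfe-... IDs, and rpc.py abbreviating the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path:

    rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base NVMe (5 GiB)
    rpc.py bdev_lvol_create_lvstore nvme0n1 lvs                           # lvstore on top of it
    rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u <lvs-uuid>             # thin 101 GiB volume
    rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # NV-cache NVMe
    rpc.py bdev_split_create nvc0n1 -s 5171 1                             # 5171 MiB cache slice
    rpc.py -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> --l2p_dram_limit 10 -c nvc0n1p0

The --l2p_dram_limit 10 comes from the l2p_dram_size_mb=10 assignment below, which is also what FTL startup later acknowledges as "l2p maximum resident size is: 9 (of 10) MiB".
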
common/autotest_common.sh@1388 -- # nb=26476544 00:28:45.747 10:20:16 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:45.747 10:20:16 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:28:45.747 10:20:16 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:28:45.747 10:20:16 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 31d74cfe-c79e-45a2-bbc4-022773af1fd1 --l2p_dram_limit 10' 00:28:45.747 10:20:16 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:28:45.747 10:20:16 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:28:45.747 10:20:16 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:28:45.747 10:20:16 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:28:45.747 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:28:45.747 10:20:16 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 31d74cfe-c79e-45a2-bbc4-022773af1fd1 --l2p_dram_limit 10 -c nvc0n1p0 00:28:46.007 [2024-12-09 10:20:16.754211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.007 [2024-12-09 10:20:16.754622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:46.007 [2024-12-09 10:20:16.754665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:46.007 [2024-12-09 10:20:16.754679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.007 [2024-12-09 10:20:16.754780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.007 [2024-12-09 10:20:16.754800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:46.007 [2024-12-09 10:20:16.754864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:28:46.007 [2024-12-09 10:20:16.754882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.007 [2024-12-09 10:20:16.754943] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:46.007 [2024-12-09 10:20:16.755984] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:46.007 [2024-12-09 10:20:16.756019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.007 [2024-12-09 10:20:16.756034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:46.007 [2024-12-09 10:20:16.756051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.091 ms 00:28:46.007 [2024-12-09 10:20:16.756063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.007 [2024-12-09 10:20:16.756309] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID c37a29d1-b9ea-48f7-b142-8cb3a1c0b0d7 00:28:46.007 [2024-12-09 10:20:16.758316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.007 [2024-12-09 10:20:16.758358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:28:46.007 [2024-12-09 10:20:16.758376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:28:46.007 [2024-12-09 10:20:16.758391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.007 [2024-12-09 10:20:16.769213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.007 [2024-12-09 
10:20:16.769476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:46.007 [2024-12-09 10:20:16.769509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.684 ms 00:28:46.007 [2024-12-09 10:20:16.769541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.007 [2024-12-09 10:20:16.769684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.007 [2024-12-09 10:20:16.769709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:46.007 [2024-12-09 10:20:16.769723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:28:46.007 [2024-12-09 10:20:16.769758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.007 [2024-12-09 10:20:16.769867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.007 [2024-12-09 10:20:16.769926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:46.007 [2024-12-09 10:20:16.769944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:28:46.007 [2024-12-09 10:20:16.769960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.007 [2024-12-09 10:20:16.769998] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:46.007 [2024-12-09 10:20:16.775557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.007 [2024-12-09 10:20:16.775590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:46.007 [2024-12-09 10:20:16.775608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.566 ms 00:28:46.007 [2024-12-09 10:20:16.775620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.007 [2024-12-09 10:20:16.775671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.007 [2024-12-09 10:20:16.775703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:46.007 [2024-12-09 10:20:16.775718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:46.007 [2024-12-09 10:20:16.775729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.007 [2024-12-09 10:20:16.775779] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:28:46.007 [2024-12-09 10:20:16.776002] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:46.007 [2024-12-09 10:20:16.776030] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:46.007 [2024-12-09 10:20:16.776048] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:46.007 [2024-12-09 10:20:16.776066] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:46.007 [2024-12-09 10:20:16.776081] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:46.007 [2024-12-09 10:20:16.776097] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:46.007 [2024-12-09 10:20:16.776109] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:46.007 [2024-12-09 10:20:16.776129] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:46.007 [2024-12-09 10:20:16.776141] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:46.007 [2024-12-09 10:20:16.776156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.007 [2024-12-09 10:20:16.776181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:46.007 [2024-12-09 10:20:16.776197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.381 ms 00:28:46.007 [2024-12-09 10:20:16.776209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.007 [2024-12-09 10:20:16.776312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.007 [2024-12-09 10:20:16.776328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:46.007 [2024-12-09 10:20:16.776344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:28:46.007 [2024-12-09 10:20:16.776355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.007 [2024-12-09 10:20:16.776497] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:46.007 [2024-12-09 10:20:16.776523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:46.007 [2024-12-09 10:20:16.776540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:46.007 [2024-12-09 10:20:16.776553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:46.007 [2024-12-09 10:20:16.776568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:46.007 [2024-12-09 10:20:16.776579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:46.007 [2024-12-09 10:20:16.776593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:46.007 [2024-12-09 10:20:16.776605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:46.007 [2024-12-09 10:20:16.776623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:46.007 [2024-12-09 10:20:16.776635] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:46.007 [2024-12-09 10:20:16.776649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:46.007 [2024-12-09 10:20:16.776660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:46.007 [2024-12-09 10:20:16.776674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:46.007 [2024-12-09 10:20:16.776685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:46.007 [2024-12-09 10:20:16.776700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:46.007 [2024-12-09 10:20:16.776722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:46.007 [2024-12-09 10:20:16.776738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:46.007 [2024-12-09 10:20:16.776749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:46.007 [2024-12-09 10:20:16.776762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:46.007 [2024-12-09 10:20:16.776774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:46.007 [2024-12-09 10:20:16.776787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:46.007 [2024-12-09 10:20:16.776798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:46.007 [2024-12-09 10:20:16.776812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:46.007 
[2024-12-09 10:20:16.776823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:46.007 [2024-12-09 10:20:16.776859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:46.007 [2024-12-09 10:20:16.776874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:46.007 [2024-12-09 10:20:16.776889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:46.008 [2024-12-09 10:20:16.776900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:46.008 [2024-12-09 10:20:16.776914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:46.008 [2024-12-09 10:20:16.776926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:46.008 [2024-12-09 10:20:16.776940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:46.008 [2024-12-09 10:20:16.776951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:46.008 [2024-12-09 10:20:16.776968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:46.008 [2024-12-09 10:20:16.776980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:46.008 [2024-12-09 10:20:16.776996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:46.008 [2024-12-09 10:20:16.777008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:46.008 [2024-12-09 10:20:16.777021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:46.008 [2024-12-09 10:20:16.777033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:46.008 [2024-12-09 10:20:16.777047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:46.008 [2024-12-09 10:20:16.777060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:46.008 [2024-12-09 10:20:16.777075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:46.008 [2024-12-09 10:20:16.777087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:46.008 [2024-12-09 10:20:16.777100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:46.008 [2024-12-09 10:20:16.777111] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:46.008 [2024-12-09 10:20:16.777126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:46.008 [2024-12-09 10:20:16.777138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:46.008 [2024-12-09 10:20:16.777153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:46.008 [2024-12-09 10:20:16.777165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:46.008 [2024-12-09 10:20:16.777182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:46.008 [2024-12-09 10:20:16.777193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:46.008 [2024-12-09 10:20:16.777208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:46.008 [2024-12-09 10:20:16.777219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:46.008 [2024-12-09 10:20:16.777233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:46.008 [2024-12-09 10:20:16.777246] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:46.008 [2024-12-09 
10:20:16.777267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:46.008 [2024-12-09 10:20:16.777287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:46.008 [2024-12-09 10:20:16.777302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:46.008 [2024-12-09 10:20:16.777314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:46.008 [2024-12-09 10:20:16.777329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:46.008 [2024-12-09 10:20:16.777341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:46.008 [2024-12-09 10:20:16.777357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:46.008 [2024-12-09 10:20:16.777369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:46.008 [2024-12-09 10:20:16.777383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:46.008 [2024-12-09 10:20:16.777395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:46.008 [2024-12-09 10:20:16.777412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:46.008 [2024-12-09 10:20:16.777425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:46.008 [2024-12-09 10:20:16.777450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:46.008 [2024-12-09 10:20:16.777468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:46.008 [2024-12-09 10:20:16.777484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:46.008 [2024-12-09 10:20:16.777496] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:46.008 [2024-12-09 10:20:16.777512] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:46.008 [2024-12-09 10:20:16.777534] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:46.008 [2024-12-09 10:20:16.777550] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:46.008 [2024-12-09 10:20:16.777562] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:46.008 [2024-12-09 10:20:16.777577] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:46.008 [2024-12-09 10:20:16.777590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.008 [2024-12-09 10:20:16.777605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:46.008 [2024-12-09 10:20:16.777618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.168 ms 00:28:46.008 [2024-12-09 10:20:16.777633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.008 [2024-12-09 10:20:16.777693] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:28:46.008 [2024-12-09 10:20:16.777717] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:28:49.293 [2024-12-09 10:20:19.698449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.293 [2024-12-09 10:20:19.698573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:28:49.293 [2024-12-09 10:20:19.698596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2920.766 ms 00:28:49.293 [2024-12-09 10:20:19.698612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.293 [2024-12-09 10:20:19.739223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.293 [2024-12-09 10:20:19.739301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:49.293 [2024-12-09 10:20:19.739340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.302 ms 00:28:49.293 [2024-12-09 10:20:19.739355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.293 [2024-12-09 10:20:19.739547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.293 [2024-12-09 10:20:19.739571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:49.293 [2024-12-09 10:20:19.739586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:28:49.293 [2024-12-09 10:20:19.739607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.293 [2024-12-09 10:20:19.785786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.293 [2024-12-09 10:20:19.785905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:49.293 [2024-12-09 10:20:19.785943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.124 ms 00:28:49.293 [2024-12-09 10:20:19.785975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.293 [2024-12-09 10:20:19.786065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.293 [2024-12-09 10:20:19.786093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:49.293 [2024-12-09 10:20:19.786138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:49.293 [2024-12-09 10:20:19.786170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.293 [2024-12-09 10:20:19.787140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.293 [2024-12-09 10:20:19.787179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:49.293 [2024-12-09 10:20:19.787195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.843 ms 00:28:49.293 [2024-12-09 10:20:19.787209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.293 
[2024-12-09 10:20:19.787398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.293 [2024-12-09 10:20:19.787415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:49.293 [2024-12-09 10:20:19.787430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:28:49.293 [2024-12-09 10:20:19.787445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.293 [2024-12-09 10:20:19.810468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.293 [2024-12-09 10:20:19.810555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:49.293 [2024-12-09 10:20:19.810590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.997 ms 00:28:49.293 [2024-12-09 10:20:19.810621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.293 [2024-12-09 10:20:19.837816] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:49.293 [2024-12-09 10:20:19.843164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.293 [2024-12-09 10:20:19.843204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:49.293 [2024-12-09 10:20:19.843230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.412 ms 00:28:49.293 [2024-12-09 10:20:19.843243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.293 [2024-12-09 10:20:19.921822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.293 [2024-12-09 10:20:19.921948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:28:49.293 [2024-12-09 10:20:19.922008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.484 ms 00:28:49.293 [2024-12-09 10:20:19.922022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.293 [2024-12-09 10:20:19.922314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.294 [2024-12-09 10:20:19.922342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:49.294 [2024-12-09 10:20:19.922364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.184 ms 00:28:49.294 [2024-12-09 10:20:19.922377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.294 [2024-12-09 10:20:19.953097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.294 [2024-12-09 10:20:19.953143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:28:49.294 [2024-12-09 10:20:19.953167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.569 ms 00:28:49.294 [2024-12-09 10:20:19.953181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.294 [2024-12-09 10:20:19.984701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.294 [2024-12-09 10:20:19.984748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:28:49.294 [2024-12-09 10:20:19.984787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.459 ms 00:28:49.294 [2024-12-09 10:20:19.984799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.294 [2024-12-09 10:20:19.985819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.294 [2024-12-09 10:20:19.985868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:49.294 
[2024-12-09 10:20:19.985890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.925 ms 00:28:49.294 [2024-12-09 10:20:19.985906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.294 [2024-12-09 10:20:20.079209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.294 [2024-12-09 10:20:20.079627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:28:49.294 [2024-12-09 10:20:20.079672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.223 ms 00:28:49.294 [2024-12-09 10:20:20.079687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.552 [2024-12-09 10:20:20.112877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.552 [2024-12-09 10:20:20.112973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:28:49.552 [2024-12-09 10:20:20.113001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.077 ms 00:28:49.552 [2024-12-09 10:20:20.113023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.552 [2024-12-09 10:20:20.144295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.553 [2024-12-09 10:20:20.144368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:28:49.553 [2024-12-09 10:20:20.144406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.201 ms 00:28:49.553 [2024-12-09 10:20:20.144417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.553 [2024-12-09 10:20:20.174875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.553 [2024-12-09 10:20:20.175120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:49.553 [2024-12-09 10:20:20.175160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.407 ms 00:28:49.553 [2024-12-09 10:20:20.175175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.553 [2024-12-09 10:20:20.175239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.553 [2024-12-09 10:20:20.175259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:49.553 [2024-12-09 10:20:20.175281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:49.553 [2024-12-09 10:20:20.175294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.553 [2024-12-09 10:20:20.175469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.553 [2024-12-09 10:20:20.175491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:49.553 [2024-12-09 10:20:20.175507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:28:49.553 [2024-12-09 10:20:20.175519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.553 [2024-12-09 10:20:20.177198] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3422.373 ms, result 0 00:28:49.553 { 00:28:49.553 "name": "ftl0", 00:28:49.553 "uuid": "c37a29d1-b9ea-48f7-b142-8cb3a1c0b0d7" 00:28:49.553 } 00:28:49.553 10:20:20 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:28:49.553 10:20:20 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:28:49.811 10:20:20 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:28:49.811 10:20:20 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:28:50.070 [2024-12-09 10:20:20.744229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.070 [2024-12-09 10:20:20.744561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:50.070 [2024-12-09 10:20:20.744594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:50.070 [2024-12-09 10:20:20.744611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.070 [2024-12-09 10:20:20.744655] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:50.070 [2024-12-09 10:20:20.748505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.070 [2024-12-09 10:20:20.748537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:50.070 [2024-12-09 10:20:20.748571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.819 ms 00:28:50.070 [2024-12-09 10:20:20.748582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.070 [2024-12-09 10:20:20.748886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.070 [2024-12-09 10:20:20.748909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:50.070 [2024-12-09 10:20:20.748924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:28:50.070 [2024-12-09 10:20:20.748952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.070 [2024-12-09 10:20:20.752019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.070 [2024-12-09 10:20:20.752049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:50.070 [2024-12-09 10:20:20.752067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.041 ms 00:28:50.070 [2024-12-09 10:20:20.752079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.070 [2024-12-09 10:20:20.758438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.070 [2024-12-09 10:20:20.758483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:50.070 [2024-12-09 10:20:20.758518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.331 ms 00:28:50.070 [2024-12-09 10:20:20.758530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.070 [2024-12-09 10:20:20.791406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.070 [2024-12-09 10:20:20.791450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:50.070 [2024-12-09 10:20:20.791503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.802 ms 00:28:50.070 [2024-12-09 10:20:20.791514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.070 [2024-12-09 10:20:20.811214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.070 [2024-12-09 10:20:20.811497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:50.070 [2024-12-09 10:20:20.811532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.643 ms 00:28:50.070 [2024-12-09 10:20:20.811546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.070 [2024-12-09 10:20:20.811775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.070 [2024-12-09 10:20:20.811811] 
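For context on the restore.sh@61-65 trace above: the test assembles the JSON config later consumed by spdk_dd by wrapping the bdev subsystem dump from rpc.py in a top-level "subsystems" array (presumably redirected into the ftl.json file that the spdk_dd invocation further below references). A minimal sketch of the same assembly in Python; the local output filename here is hypothetical, and it requires a running SPDK application to answer the RPC:

import json
import subprocess

# save_subsystem_config -n bdev prints the bdev subsystem object,
# exactly as invoked by restore.sh@62 above
rpc = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
cfg = subprocess.run([rpc, "save_subsystem_config", "-n", "bdev"],
                     capture_output=True, text=True, check=True).stdout
with open("ftl.json", "w") as f:
    f.write('{"subsystems": [' + cfg + ']}')
json.load(open("ftl.json"))  # sanity-check that the wrapped config parses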
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:50.070 [2024-12-09 10:20:20.811827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:28:50.070 [2024-12-09 10:20:20.811839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.070 [2024-12-09 10:20:20.844178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.070 [2024-12-09 10:20:20.844221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:50.070 [2024-12-09 10:20:20.844244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.255 ms 00:28:50.070 [2024-12-09 10:20:20.844257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.330 [2024-12-09 10:20:20.874832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.330 [2024-12-09 10:20:20.874894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:50.330 [2024-12-09 10:20:20.874946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.479 ms 00:28:50.330 [2024-12-09 10:20:20.874974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.330 [2024-12-09 10:20:20.904429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.330 [2024-12-09 10:20:20.904467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:50.330 [2024-12-09 10:20:20.904517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.387 ms 00:28:50.330 [2024-12-09 10:20:20.904528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.330 [2024-12-09 10:20:20.936404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.330 [2024-12-09 10:20:20.936444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:50.330 [2024-12-09 10:20:20.936480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.764 ms 00:28:50.330 [2024-12-09 10:20:20.936492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.330 [2024-12-09 10:20:20.936558] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:50.330 [2024-12-09 10:20:20.936582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.936603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.936615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.936645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.936657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.936672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.936683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.936700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.936713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.936727] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.936754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.936768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.936780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.936796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.936808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.936822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.936835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.936851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.936877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.936894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.936906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.936936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.936964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.937152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.937168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.937184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.937198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.937213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.937226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.937243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.937255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.937271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.937299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.937331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 
[2024-12-09 10:20:20.937358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.937372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.937384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.937398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.937409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.937425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.937437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.937451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.937463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.937478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.937490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.937504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.937515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.937529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.937540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.937554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.937566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.937579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.937591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.937605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.937616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.937647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.937659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.937673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.937684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:28:50.330 [2024-12-09 10:20:20.937698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.937710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.937724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.937735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.937748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.937760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.937775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.937786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.937800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.937811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:50.330 [2024-12-09 10:20:20.937827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:50.331 [2024-12-09 10:20:20.937838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:50.331 [2024-12-09 10:20:20.937855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:50.331 [2024-12-09 10:20:20.937866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:50.331 [2024-12-09 10:20:20.937880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:50.331 [2024-12-09 10:20:20.937891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:50.331 [2024-12-09 10:20:20.938302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:50.331 [2024-12-09 10:20:20.938372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:50.331 [2024-12-09 10:20:20.938527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:50.331 [2024-12-09 10:20:20.938598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:50.331 [2024-12-09 10:20:20.938661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:50.331 [2024-12-09 10:20:20.938851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:50.331 [2024-12-09 10:20:20.938930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:50.331 [2024-12-09 10:20:20.939089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:50.331 [2024-12-09 10:20:20.939230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:28:50.331 [2024-12-09 10:20:20.939296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:50.331 [2024-12-09 10:20:20.939496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:50.331 [2024-12-09 10:20:20.939563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:50.331 [2024-12-09 10:20:20.939627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:50.331 [2024-12-09 10:20:20.939798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:50.331 [2024-12-09 10:20:20.939890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:50.331 [2024-12-09 10:20:20.939982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:50.331 [2024-12-09 10:20:20.940001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:50.331 [2024-12-09 10:20:20.940015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:50.331 [2024-12-09 10:20:20.940031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:50.331 [2024-12-09 10:20:20.940043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:50.331 [2024-12-09 10:20:20.940062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:50.331 [2024-12-09 10:20:20.940083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:50.331 [2024-12-09 10:20:20.940100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:50.331 [2024-12-09 10:20:20.940113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:50.331 [2024-12-09 10:20:20.940128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:50.331 [2024-12-09 10:20:20.940150] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:50.331 [2024-12-09 10:20:20.940166] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c37a29d1-b9ea-48f7-b142-8cb3a1c0b0d7 00:28:50.331 [2024-12-09 10:20:20.940179] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:50.331 [2024-12-09 10:20:20.940196] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:50.331 [2024-12-09 10:20:20.940221] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:50.331 [2024-12-09 10:20:20.940235] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:50.331 [2024-12-09 10:20:20.940247] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:50.331 [2024-12-09 10:20:20.940262] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:50.331 [2024-12-09 10:20:20.940274] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:50.331 [2024-12-09 10:20:20.940288] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:50.331 [2024-12-09 10:20:20.940298] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:28:50.331 [2024-12-09 10:20:20.940314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.331 [2024-12-09 10:20:20.940327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:50.331 [2024-12-09 10:20:20.940372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.758 ms 00:28:50.331 [2024-12-09 10:20:20.940387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.331 [2024-12-09 10:20:20.958456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.331 [2024-12-09 10:20:20.958527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:50.331 [2024-12-09 10:20:20.958579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.982 ms 00:28:50.331 [2024-12-09 10:20:20.958606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.331 [2024-12-09 10:20:20.959139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.331 [2024-12-09 10:20:20.959216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:50.331 [2024-12-09 10:20:20.959247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.496 ms 00:28:50.331 [2024-12-09 10:20:20.959261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.331 [2024-12-09 10:20:21.016658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:50.331 [2024-12-09 10:20:21.016713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:50.331 [2024-12-09 10:20:21.016751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:50.331 [2024-12-09 10:20:21.016763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.331 [2024-12-09 10:20:21.016878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:50.331 [2024-12-09 10:20:21.016897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:50.331 [2024-12-09 10:20:21.016917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:50.331 [2024-12-09 10:20:21.016929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.331 [2024-12-09 10:20:21.017081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:50.331 [2024-12-09 10:20:21.017102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:50.331 [2024-12-09 10:20:21.017118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:50.331 [2024-12-09 10:20:21.017130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.331 [2024-12-09 10:20:21.017165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:50.331 [2024-12-09 10:20:21.017196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:50.331 [2024-12-09 10:20:21.017212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:50.331 [2024-12-09 10:20:21.017228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.591 [2024-12-09 10:20:21.133071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:50.591 [2024-12-09 10:20:21.133167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:50.591 [2024-12-09 10:20:21.133195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
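The Bands validity dump above prints one line per band (valid LBAs / band size, write count, state). On this freshly created device all 100 bands are free with zero valid blocks, which is what the statistics block then asserts with "total valid LBAs: 0"; "WAF: inf" follows from 960 total writes against 0 user writes. A short sketch for summarizing such a dump, again assuming the console output was saved to a file (ftl.log is a hypothetical name):

import re
from collections import Counter

text = open("ftl.log").read()  # hypothetical capture of the console log
bands = re.findall(r"Band (\d+): (\d+) / \d+ wr_cnt: \d+ state: (\w+)", text)
print(Counter(state for _, _, state in bands))   # Counter({'free': 100})
print(sum(int(valid) for _, valid, _ in bands))  # 0, matching 'total valid LBAs: 0'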
00:28:50.591 [2024-12-09 10:20:21.133208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.591 [2024-12-09 10:20:21.216966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:50.591 [2024-12-09 10:20:21.217045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:50.591 [2024-12-09 10:20:21.217072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:50.591 [2024-12-09 10:20:21.217089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.591 [2024-12-09 10:20:21.217253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:50.591 [2024-12-09 10:20:21.217289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:50.591 [2024-12-09 10:20:21.217336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:50.591 [2024-12-09 10:20:21.217363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.591 [2024-12-09 10:20:21.217436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:50.591 [2024-12-09 10:20:21.217453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:50.591 [2024-12-09 10:20:21.217468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:50.591 [2024-12-09 10:20:21.217479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.591 [2024-12-09 10:20:21.217612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:50.591 [2024-12-09 10:20:21.217629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:50.591 [2024-12-09 10:20:21.217643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:50.591 [2024-12-09 10:20:21.217655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.591 [2024-12-09 10:20:21.217716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:50.591 [2024-12-09 10:20:21.217733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:50.591 [2024-12-09 10:20:21.217747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:50.591 [2024-12-09 10:20:21.217757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.591 [2024-12-09 10:20:21.217812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:50.591 [2024-12-09 10:20:21.217827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:50.591 [2024-12-09 10:20:21.217841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:50.591 [2024-12-09 10:20:21.217852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.591 [2024-12-09 10:20:21.217984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:50.591 [2024-12-09 10:20:21.218005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:50.591 [2024-12-09 10:20:21.218029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:50.591 [2024-12-09 10:20:21.218042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.591 [2024-12-09 10:20:21.218245] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 473.987 ms, result 0 00:28:50.591 true 00:28:50.591 10:20:21 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 79648 
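Each management step in the traces above is logged by trace_step as a 427 "Action"/"Rollback" record, a 428 "name" record, a 430 "duration" record, and a 431 "status" record, and finish_msg then reports the per-process total ('FTL startup' = 3422.373 ms, dominated by the 2920.766 ms NV cache scrub; 'FTL shutdown' = 473.987 ms). A minimal sketch that ranks the slowest steps from a saved log (ftl.log is hypothetical; it assumes every name record is followed by the console's elapsed-time stamp, as in this capture):

import re

text = open("ftl.log").read()
names = re.findall(r"name: (.+?) \d{2}:\d{2}:\d{2}\.\d{3}", text)
durs = [float(d) for d in re.findall(r"duration: ([0-9.]+) ms", text)]
steps = sorted(zip(durs, names), reverse=True)
for dur, name in steps[:3]:
    print(f"{dur:9.3f} ms  {name}")  # 'Scrub NV cache' should top the list
print(f"{sum(durs):9.3f} ms across {len(steps)} steps (all processes in the log)")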
00:28:50.591 10:20:21 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79648 ']' 00:28:50.591 10:20:21 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79648 00:28:50.591 10:20:21 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:28:50.591 10:20:21 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:50.591 10:20:21 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79648 00:28:50.591 killing process with pid 79648 00:28:50.591 10:20:21 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:50.591 10:20:21 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:50.591 10:20:21 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79648' 00:28:50.591 10:20:21 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 79648 00:28:50.591 10:20:21 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 79648 00:28:55.866 10:20:26 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:29:01.135 262144+0 records in 00:29:01.135 262144+0 records out 00:29:01.135 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.73694 s, 227 MB/s 00:29:01.135 10:20:31 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:02.509 10:20:33 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:02.767 [2024-12-09 10:20:33.341089] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:29:02.767 [2024-12-09 10:20:33.341286] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79904 ] 00:29:02.767 [2024-12-09 10:20:33.536737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:03.024 [2024-12-09 10:20:33.702559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:03.591 [2024-12-09 10:20:34.117288] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:03.591 [2024-12-09 10:20:34.117698] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:03.591 [2024-12-09 10:20:34.293840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.591 [2024-12-09 10:20:34.293902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:03.591 [2024-12-09 10:20:34.293938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:03.591 [2024-12-09 10:20:34.293950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.591 [2024-12-09 10:20:34.294047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.591 [2024-12-09 10:20:34.294076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:03.591 [2024-12-09 10:20:34.294090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:29:03.591 [2024-12-09 10:20:34.294113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.591 [2024-12-09 10:20:34.294150] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
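The restore.sh@69 step above fills a 1 GiB test file from /dev/urandom before check-summing it and writing it through ftl0 with spdk_dd. The dd figures are self-consistent, as a quick arithmetic check shows (values copied from the log):

records = 256 * 1024                 # count=256K
bs = 4 * 1024                        # bs=4K
total = records * bs
print(total)                         # 1073741824 bytes (1.0 GiB), as dd reports
print(round(total / 4.73694 / 1e6))  # 227 decimal MB/s, matching '227 MB/s'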
Using nvc0n1p0 as write buffer cache 00:29:03.591 [2024-12-09 10:20:34.295096] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:03.591 [2024-12-09 10:20:34.295130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.591 [2024-12-09 10:20:34.295144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:03.591 [2024-12-09 10:20:34.295157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.987 ms 00:29:03.591 [2024-12-09 10:20:34.295169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.591 [2024-12-09 10:20:34.297453] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:03.591 [2024-12-09 10:20:34.317433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.591 [2024-12-09 10:20:34.317474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:03.591 [2024-12-09 10:20:34.317521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.981 ms 00:29:03.591 [2024-12-09 10:20:34.317533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.591 [2024-12-09 10:20:34.317611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.591 [2024-12-09 10:20:34.317631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:03.591 [2024-12-09 10:20:34.317643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:29:03.591 [2024-12-09 10:20:34.317653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.591 [2024-12-09 10:20:34.329621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.591 [2024-12-09 10:20:34.329670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:03.591 [2024-12-09 10:20:34.329704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.878 ms 00:29:03.591 [2024-12-09 10:20:34.329722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.591 [2024-12-09 10:20:34.329823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.591 [2024-12-09 10:20:34.329858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:03.591 [2024-12-09 10:20:34.329900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:29:03.591 [2024-12-09 10:20:34.329911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.591 [2024-12-09 10:20:34.330010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.591 [2024-12-09 10:20:34.330053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:03.591 [2024-12-09 10:20:34.330072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:03.591 [2024-12-09 10:20:34.330084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.591 [2024-12-09 10:20:34.330145] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:03.591 [2024-12-09 10:20:34.335694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.591 [2024-12-09 10:20:34.335730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:03.591 [2024-12-09 10:20:34.335766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.561 ms 00:29:03.591 [2024-12-09 10:20:34.335776] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.591 [2024-12-09 10:20:34.335823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.591 [2024-12-09 10:20:34.335857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:03.591 [2024-12-09 10:20:34.335889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:03.591 [2024-12-09 10:20:34.335901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.591 [2024-12-09 10:20:34.335947] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:03.591 [2024-12-09 10:20:34.335982] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:03.591 [2024-12-09 10:20:34.336041] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:03.591 [2024-12-09 10:20:34.336079] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:03.591 [2024-12-09 10:20:34.336192] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:03.591 [2024-12-09 10:20:34.336208] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:03.591 [2024-12-09 10:20:34.336223] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:03.591 [2024-12-09 10:20:34.336239] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:03.591 [2024-12-09 10:20:34.336253] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:03.591 [2024-12-09 10:20:34.336266] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:03.591 [2024-12-09 10:20:34.336279] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:03.591 [2024-12-09 10:20:34.336296] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:03.591 [2024-12-09 10:20:34.336308] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:03.591 [2024-12-09 10:20:34.336321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.591 [2024-12-09 10:20:34.336333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:03.591 [2024-12-09 10:20:34.336346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.378 ms 00:29:03.591 [2024-12-09 10:20:34.336373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.591 [2024-12-09 10:20:34.336505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.591 [2024-12-09 10:20:34.336521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:03.591 [2024-12-09 10:20:34.336532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:29:03.591 [2024-12-09 10:20:34.336542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.591 [2024-12-09 10:20:34.336654] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:03.591 [2024-12-09 10:20:34.336673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:03.591 [2024-12-09 10:20:34.336684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:29:03.591 [2024-12-09 10:20:34.336695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:03.591 [2024-12-09 10:20:34.336706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:03.591 [2024-12-09 10:20:34.336716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:03.592 [2024-12-09 10:20:34.336726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:03.592 [2024-12-09 10:20:34.336736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:03.592 [2024-12-09 10:20:34.336746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:03.592 [2024-12-09 10:20:34.336755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:03.592 [2024-12-09 10:20:34.336765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:03.592 [2024-12-09 10:20:34.336775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:03.592 [2024-12-09 10:20:34.336784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:03.592 [2024-12-09 10:20:34.336807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:03.592 [2024-12-09 10:20:34.336817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:03.592 [2024-12-09 10:20:34.336827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:03.592 [2024-12-09 10:20:34.336839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:03.592 [2024-12-09 10:20:34.336850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:03.592 [2024-12-09 10:20:34.336859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:03.592 [2024-12-09 10:20:34.336869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:03.592 [2024-12-09 10:20:34.336879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:03.592 [2024-12-09 10:20:34.336888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:03.592 [2024-12-09 10:20:34.336898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:03.592 [2024-12-09 10:20:34.336907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:03.592 [2024-12-09 10:20:34.336933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:03.592 [2024-12-09 10:20:34.336945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:03.592 [2024-12-09 10:20:34.336955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:03.592 [2024-12-09 10:20:34.336965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:03.592 [2024-12-09 10:20:34.336975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:03.592 [2024-12-09 10:20:34.336984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:03.592 [2024-12-09 10:20:34.336994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:03.592 [2024-12-09 10:20:34.337004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:03.592 [2024-12-09 10:20:34.337016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:03.592 [2024-12-09 10:20:34.337049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:03.592 [2024-12-09 10:20:34.337068] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:29:03.592 [2024-12-09 10:20:34.337082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:03.592 [2024-12-09 10:20:34.337093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:03.592 [2024-12-09 10:20:34.337107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:03.592 [2024-12-09 10:20:34.337118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:03.592 [2024-12-09 10:20:34.337129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:03.592 [2024-12-09 10:20:34.337140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:03.592 [2024-12-09 10:20:34.337151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:03.592 [2024-12-09 10:20:34.337163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:03.592 [2024-12-09 10:20:34.337173] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:03.592 [2024-12-09 10:20:34.337185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:03.592 [2024-12-09 10:20:34.337197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:03.592 [2024-12-09 10:20:34.337209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:03.592 [2024-12-09 10:20:34.337221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:03.592 [2024-12-09 10:20:34.337234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:03.592 [2024-12-09 10:20:34.337245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:03.592 [2024-12-09 10:20:34.337257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:03.592 [2024-12-09 10:20:34.337267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:03.592 [2024-12-09 10:20:34.337278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:03.592 [2024-12-09 10:20:34.337291] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:03.592 [2024-12-09 10:20:34.337305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:03.592 [2024-12-09 10:20:34.337324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:03.592 [2024-12-09 10:20:34.337336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:03.592 [2024-12-09 10:20:34.337348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:03.592 [2024-12-09 10:20:34.337375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:03.592 [2024-12-09 10:20:34.337386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:03.592 [2024-12-09 10:20:34.337411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:03.592 [2024-12-09 10:20:34.337436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:03.592 [2024-12-09 10:20:34.337446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:03.592 [2024-12-09 10:20:34.337456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:03.592 [2024-12-09 10:20:34.337466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:03.592 [2024-12-09 10:20:34.337477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:03.592 [2024-12-09 10:20:34.337487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:03.592 [2024-12-09 10:20:34.337496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:03.592 [2024-12-09 10:20:34.337507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:03.592 [2024-12-09 10:20:34.337517] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:03.592 [2024-12-09 10:20:34.337529] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:03.592 [2024-12-09 10:20:34.337541] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:03.592 [2024-12-09 10:20:34.337552] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:03.592 [2024-12-09 10:20:34.337563] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:03.592 [2024-12-09 10:20:34.337574] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:03.592 [2024-12-09 10:20:34.337585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.592 [2024-12-09 10:20:34.337595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:03.592 [2024-12-09 10:20:34.337607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.990 ms 00:29:03.592 [2024-12-09 10:20:34.337618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.592 [2024-12-09 10:20:34.381258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.592 [2024-12-09 10:20:34.381332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:03.592 [2024-12-09 10:20:34.381354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.574 ms 00:29:03.592 [2024-12-09 10:20:34.381395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.592 [2024-12-09 10:20:34.381546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.592 [2024-12-09 10:20:34.381563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:03.592 [2024-12-09 10:20:34.381577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
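The two layout dumps above express the same geometry in different units: ftl_layout's dump_region prints offsets and sizes in MiB, while the superblock regions count FTL blocks. Taking the 4 KiB FTL block size implied by these dumps, the numbers line up, as the sketch below shows; the L2P size also follows directly from the reported entry count and address size.

# values copied from the dumps above; 4 KiB FTL block size is an
# assumption consistent with every pair of figures in this log
print(0x5000 * 4096 / 2**20)     # 80.0  -> 'Region l2p ... blocks: 80.00 MiB'
print(0x80 * 4096 / 2**20)       # 0.5   -> 'Region band_md ... blocks: 0.50 MiB'
print(0x1900000 * 4096 / 2**20)  # 102400.0 -> 'Region data_btm ... 102400.00 MiB'
print(20971520 * 4 / 2**20)      # 80.0 MiB of L2P: entries x address size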
0.066 ms 00:29:03.592 [2024-12-09 10:20:34.381588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.851 [2024-12-09 10:20:34.441124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.851 [2024-12-09 10:20:34.441193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:03.851 [2024-12-09 10:20:34.441231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.404 ms 00:29:03.851 [2024-12-09 10:20:34.441244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.851 [2024-12-09 10:20:34.441325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.851 [2024-12-09 10:20:34.441345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:03.851 [2024-12-09 10:20:34.441381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:03.851 [2024-12-09 10:20:34.441393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.851 [2024-12-09 10:20:34.442154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.851 [2024-12-09 10:20:34.442184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:03.851 [2024-12-09 10:20:34.442200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.617 ms 00:29:03.851 [2024-12-09 10:20:34.442213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.851 [2024-12-09 10:20:34.442409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.851 [2024-12-09 10:20:34.442436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:03.851 [2024-12-09 10:20:34.442458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:29:03.851 [2024-12-09 10:20:34.442470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.851 [2024-12-09 10:20:34.463215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.851 [2024-12-09 10:20:34.463440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:03.851 [2024-12-09 10:20:34.463469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.713 ms 00:29:03.851 [2024-12-09 10:20:34.463482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.851 [2024-12-09 10:20:34.481561] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:29:03.851 [2024-12-09 10:20:34.481602] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:03.851 [2024-12-09 10:20:34.481635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.851 [2024-12-09 10:20:34.481646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:03.851 [2024-12-09 10:20:34.481658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.000 ms 00:29:03.851 [2024-12-09 10:20:34.481668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.851 [2024-12-09 10:20:34.509731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.851 [2024-12-09 10:20:34.509778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:03.851 [2024-12-09 10:20:34.509810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.014 ms 00:29:03.851 [2024-12-09 10:20:34.509822] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.851 [2024-12-09 10:20:34.524954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.852 [2024-12-09 10:20:34.524989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:03.852 [2024-12-09 10:20:34.525004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.060 ms 00:29:03.852 [2024-12-09 10:20:34.525013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.852 [2024-12-09 10:20:34.540322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.852 [2024-12-09 10:20:34.540362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:03.852 [2024-12-09 10:20:34.540409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.237 ms 00:29:03.852 [2024-12-09 10:20:34.540435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.852 [2024-12-09 10:20:34.541300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.852 [2024-12-09 10:20:34.541335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:03.852 [2024-12-09 10:20:34.541351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.728 ms 00:29:03.852 [2024-12-09 10:20:34.541368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.852 [2024-12-09 10:20:34.627744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.852 [2024-12-09 10:20:34.627860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:03.852 [2024-12-09 10:20:34.627885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.347 ms 00:29:03.852 [2024-12-09 10:20:34.627915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.852 [2024-12-09 10:20:34.640785] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:03.852 [2024-12-09 10:20:34.644840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.852 [2024-12-09 10:20:34.644915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:03.852 [2024-12-09 10:20:34.644939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.817 ms 00:29:03.852 [2024-12-09 10:20:34.644953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.852 [2024-12-09 10:20:34.645101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.852 [2024-12-09 10:20:34.645124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:03.852 [2024-12-09 10:20:34.645139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:29:03.852 [2024-12-09 10:20:34.645152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.852 [2024-12-09 10:20:34.645331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.852 [2024-12-09 10:20:34.645368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:03.852 [2024-12-09 10:20:34.645398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:29:03.852 [2024-12-09 10:20:34.645410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.852 [2024-12-09 10:20:34.645475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.852 [2024-12-09 10:20:34.645492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:29:03.852 [2024-12-09 10:20:34.645505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:03.852 [2024-12-09 10:20:34.645516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.852 [2024-12-09 10:20:34.645588] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:03.852 [2024-12-09 10:20:34.645627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.852 [2024-12-09 10:20:34.645653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:03.852 [2024-12-09 10:20:34.645665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:29:03.852 [2024-12-09 10:20:34.645676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.111 [2024-12-09 10:20:34.677958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.111 [2024-12-09 10:20:34.678188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:04.111 [2024-12-09 10:20:34.678218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.251 ms 00:29:04.111 [2024-12-09 10:20:34.678252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.111 [2024-12-09 10:20:34.678343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.111 [2024-12-09 10:20:34.678362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:04.111 [2024-12-09 10:20:34.678376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:29:04.111 [2024-12-09 10:20:34.678388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.111 [2024-12-09 10:20:34.680449] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 385.824 ms, result 0 00:29:05.045 [2024-12-09T10:20:36.775Z] Copying: 21/1024 [MB] (21 MBps) [...] [2024-12-09T10:21:19.099Z] Copying: 1024/1024 [MB] (average 23 MBps) [2024-12-09 10:21:19.022346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.302 [2024-12-09 10:21:19.022445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:48.302 [2024-12-09 10:21:19.022469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:48.302 [2024-12-09 10:21:19.022483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.302 [2024-12-09 10:21:19.022519] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:48.302 [2024-12-09 10:21:19.026720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.302 [2024-12-09 10:21:19.026754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:48.302 [2024-12-09 10:21:19.026791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.177 ms 00:29:48.302 [2024-12-09 10:21:19.026801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.302 [2024-12-09 10:21:19.028913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.302 [2024-12-09 10:21:19.028959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:48.302 [2024-12-09 10:21:19.028990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.084 ms 00:29:48.302 [2024-12-09 10:21:19.029000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.302 [2024-12-09 10:21:19.047728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.302 [2024-12-09 10:21:19.047766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:48.302 [2024-12-09 10:21:19.047798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.709 ms 00:29:48.302 [2024-12-09 10:21:19.047808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.302 [2024-12-09 10:21:19.054150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.302 [2024-12-09 10:21:19.054185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:48.302 [2024-12-09 10:21:19.054201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.265 ms 00:29:48.302 [2024-12-09 10:21:19.054213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.302
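Each management step above is emitted by mngt/ftl_mngt.c as an Action / name / duration / status quartet, and finish_msg closes each process with an end-to-end total (385.824 ms for 'FTL startup' here). A minimal sketch, assuming this console output has been saved to a local file (ftl.log is a hypothetical path), that tallies the per-step durations from those records:

```python
import re

# Pair each "name: <step>" record with the "duration: <ms> ms" record that
# follows it. The step name is terminated by the Jenkins wall-clock stamp
# (e.g. 00:29:48.302) that prefixes the next console line.
TRACE_RE = re.compile(
    r"name: (?P<name>.*?) \d{2}:\d{2}:\d{2}\.\d{3}.*?duration: (?P<ms>[\d.]+) ms",
    re.DOTALL,
)

durations = {}
with open("ftl.log") as f:  # hypothetical: this console output saved locally
    for m in TRACE_RE.finditer(f.read()):
        name = m.group("name")
        durations[name] = durations.get(name, 0.0) + float(m.group("ms"))

# Print steps sorted by total time spent, slowest first.
for name, ms in sorted(durations.items(), key=lambda kv: -kv[1]):
    print(f"{ms:9.3f} ms  {name}")
```

Since finish_msg prints "duration = ... ms" rather than "duration: ... ms", the per-process totals are not double-counted by this pattern.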
[2024-12-09 10:21:19.086280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.303 [2024-12-09 10:21:19.086324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:48.303 [2024-12-09 10:21:19.086341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.972 ms 00:29:48.303 [2024-12-09 10:21:19.086354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.563 [2024-12-09 10:21:19.105012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.563 [2024-12-09 10:21:19.105050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:48.563 [2024-12-09 10:21:19.105082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.605 ms 00:29:48.563 [2024-12-09 10:21:19.105093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.563 [2024-12-09 10:21:19.105282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.563 [2024-12-09 10:21:19.105308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:48.563 [2024-12-09 10:21:19.105322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:29:48.563 [2024-12-09 10:21:19.105334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.563 [2024-12-09 10:21:19.135952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.563 [2024-12-09 10:21:19.136158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:48.563 [2024-12-09 10:21:19.136186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.598 ms 00:29:48.563 [2024-12-09 10:21:19.136197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.563 [2024-12-09 10:21:19.166514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.563 [2024-12-09 10:21:19.166602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:48.563 [2024-12-09 10:21:19.166650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.258 ms 00:29:48.563 [2024-12-09 10:21:19.166660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.563 [2024-12-09 10:21:19.196872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.564 [2024-12-09 10:21:19.196949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:48.564 [2024-12-09 10:21:19.196981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.171 ms 00:29:48.564 [2024-12-09 10:21:19.196991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.564 [2024-12-09 10:21:19.226854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.564 [2024-12-09 10:21:19.226914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:48.564 [2024-12-09 10:21:19.226947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.755 ms 00:29:48.564 [2024-12-09 10:21:19.226958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.564
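After the clean state is set, ftl_debug.c dumps per-band validity in the fixed form "Band <n>: <valid> / <capacity> wr_cnt: <count> state: <state>"; in this run all 100 bands report 0 / 261120 with wr_cnt 0 and state free. A companion sketch (same hypothetical ftl.log, run against the full console output) that aggregates those records:

```python
import re
from collections import Counter

# One match per "Band <n>: <valid> / <capacity> wr_cnt: <w> state: <s>" record.
BAND_RE = re.compile(r"Band (\d+): (\d+) / (\d+) wr_cnt: (\d+) state: (\w+)")

states = Counter()
valid_blocks = 0
with open("ftl.log") as f:  # hypothetical: this console output saved locally
    for _band, valid, _cap, _wr_cnt, state in BAND_RE.findall(f.read()):
        states[state] += 1
        valid_blocks += int(valid)

print(states)                         # expected for this run: Counter({'free': 100})
print("valid blocks:", valid_blocks)  # expected for this run: 0
```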
[2024-12-09 10:21:19.227008] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:48.564 [2024-12-09 10:21:19.227048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:48.564 (Bands 2-100 identical: 0 / 261120 wr_cnt: 0 state: free) 00:29:48.565 [2024-12-09 10:21:19.228379] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:48.565 [2024-12-09 10:21:19.228403] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 
c37a29d1-b9ea-48f7-b142-8cb3a1c0b0d7 00:29:48.565 [2024-12-09 10:21:19.228416] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:48.565 [2024-12-09 10:21:19.228428] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:48.565 [2024-12-09 10:21:19.228439] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:48.565 [2024-12-09 10:21:19.228451] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:48.565 [2024-12-09 10:21:19.228462] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:48.565 [2024-12-09 10:21:19.228493] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:48.565 [2024-12-09 10:21:19.228506] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:48.565 [2024-12-09 10:21:19.228516] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:48.565 [2024-12-09 10:21:19.228526] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:48.565 [2024-12-09 10:21:19.228537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.565 [2024-12-09 10:21:19.228549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:48.565 [2024-12-09 10:21:19.228562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.531 ms 00:29:48.565 [2024-12-09 10:21:19.228573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.565 [2024-12-09 10:21:19.245988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.565 [2024-12-09 10:21:19.246021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:48.565 [2024-12-09 10:21:19.246052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.332 ms 00:29:48.565 [2024-12-09 10:21:19.246062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.565 [2024-12-09 10:21:19.246629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.565 [2024-12-09 10:21:19.246657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:48.565 [2024-12-09 10:21:19.246671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:29:48.565 [2024-12-09 10:21:19.246698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.565 [2024-12-09 10:21:19.295227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:48.565 [2024-12-09 10:21:19.295442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:48.565 [2024-12-09 10:21:19.295472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:48.565 [2024-12-09 10:21:19.295497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.565 [2024-12-09 10:21:19.295582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:48.565 [2024-12-09 10:21:19.295599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:48.565 [2024-12-09 10:21:19.295612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:48.565 [2024-12-09 10:21:19.295647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.565 [2024-12-09 10:21:19.295763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:48.565 [2024-12-09 10:21:19.295798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:48.565 
[2024-12-09 10:21:19.295810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:48.565 [2024-12-09 10:21:19.295821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.565 [2024-12-09 10:21:19.295875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:48.565 [2024-12-09 10:21:19.295916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:48.565 [2024-12-09 10:21:19.295932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:48.565 [2024-12-09 10:21:19.295944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.825 [2024-12-09 10:21:19.409829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:48.825 [2024-12-09 10:21:19.409932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:48.825 [2024-12-09 10:21:19.409969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:48.825 [2024-12-09 10:21:19.409982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.825 [2024-12-09 10:21:19.500863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:48.825 [2024-12-09 10:21:19.500931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:48.825 [2024-12-09 10:21:19.500951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:48.825 [2024-12-09 10:21:19.500979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.825 [2024-12-09 10:21:19.501090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:48.825 [2024-12-09 10:21:19.501109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:48.825 [2024-12-09 10:21:19.501122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:48.825 [2024-12-09 10:21:19.501143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.825 [2024-12-09 10:21:19.501215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:48.825 [2024-12-09 10:21:19.501230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:48.825 [2024-12-09 10:21:19.501260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:48.825 [2024-12-09 10:21:19.501271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.825 [2024-12-09 10:21:19.501400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:48.825 [2024-12-09 10:21:19.501419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:48.825 [2024-12-09 10:21:19.501432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:48.825 [2024-12-09 10:21:19.501444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.825 [2024-12-09 10:21:19.501493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:48.825 [2024-12-09 10:21:19.501517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:48.825 [2024-12-09 10:21:19.501537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:48.825 [2024-12-09 10:21:19.501562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.825 [2024-12-09 10:21:19.501660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:48.825 [2024-12-09 10:21:19.501674] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:48.825 [2024-12-09 10:21:19.501685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:48.825 [2024-12-09 10:21:19.501696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.825 [2024-12-09 10:21:19.501749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:48.825 [2024-12-09 10:21:19.501765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:48.825 [2024-12-09 10:21:19.501776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:48.825 [2024-12-09 10:21:19.501788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.825 [2024-12-09 10:21:19.501968] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 479.566 ms, result 0 00:29:50.203 00:29:50.203 00:29:50.203 10:21:20 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:29:50.203 [2024-12-09 10:21:20.758849] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:29:50.203 [2024-12-09 10:21:20.759042] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80363 ] 00:29:50.203 [2024-12-09 10:21:20.941242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:50.461 [2024-12-09 10:21:21.066841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:50.720 [2024-12-09 10:21:21.419794] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:50.720 [2024-12-09 10:21:21.419914] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:50.980 [2024-12-09 10:21:21.580948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.980 [2024-12-09 10:21:21.581011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:50.980 [2024-12-09 10:21:21.581049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:50.980 [2024-12-09 10:21:21.581060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.980 [2024-12-09 10:21:21.581122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.980 [2024-12-09 10:21:21.581163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:50.980 [2024-12-09 10:21:21.581176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:29:50.980 [2024-12-09 10:21:21.581186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.980 [2024-12-09 10:21:21.581216] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:50.980 [2024-12-09 10:21:21.582234] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:50.980 [2024-12-09 10:21:21.582279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.980 [2024-12-09 10:21:21.582295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:50.980 [2024-12-09 10:21:21.582309] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.070 ms 00:29:50.980 [2024-12-09 10:21:21.582320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.980 [2024-12-09 10:21:21.584751] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:50.980 [2024-12-09 10:21:21.600929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.980 [2024-12-09 10:21:21.600974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:50.980 [2024-12-09 10:21:21.601006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.180 ms 00:29:50.980 [2024-12-09 10:21:21.601017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.980 [2024-12-09 10:21:21.601088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.980 [2024-12-09 10:21:21.601106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:50.980 [2024-12-09 10:21:21.601118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:29:50.980 [2024-12-09 10:21:21.601146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.980 [2024-12-09 10:21:21.611249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.980 [2024-12-09 10:21:21.611305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:50.980 [2024-12-09 10:21:21.611337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.020 ms 00:29:50.980 [2024-12-09 10:21:21.611353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.980 [2024-12-09 10:21:21.611448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.980 [2024-12-09 10:21:21.611465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:50.980 [2024-12-09 10:21:21.611477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:29:50.980 [2024-12-09 10:21:21.611488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.980 [2024-12-09 10:21:21.611559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.980 [2024-12-09 10:21:21.611576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:50.980 [2024-12-09 10:21:21.611589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:50.980 [2024-12-09 10:21:21.611599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.980 [2024-12-09 10:21:21.611653] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:50.980 [2024-12-09 10:21:21.616306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.980 [2024-12-09 10:21:21.616339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:50.980 [2024-12-09 10:21:21.616375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.662 ms 00:29:50.980 [2024-12-09 10:21:21.616385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.980 [2024-12-09 10:21:21.616422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.980 [2024-12-09 10:21:21.616436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:50.980 [2024-12-09 10:21:21.616447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:50.980 [2024-12-09 10:21:21.616458] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.980 [2024-12-09 10:21:21.616508] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:50.981 [2024-12-09 10:21:21.616541] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:50.981 [2024-12-09 10:21:21.616578] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:50.981 [2024-12-09 10:21:21.616601] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:50.981 [2024-12-09 10:21:21.616714] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:50.981 [2024-12-09 10:21:21.616728] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:50.981 [2024-12-09 10:21:21.616742] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:50.981 [2024-12-09 10:21:21.616755] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:50.981 [2024-12-09 10:21:21.616767] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:50.981 [2024-12-09 10:21:21.616778] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:50.981 [2024-12-09 10:21:21.616788] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:50.981 [2024-12-09 10:21:21.616803] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:50.981 [2024-12-09 10:21:21.616812] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:50.981 [2024-12-09 10:21:21.616823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.981 [2024-12-09 10:21:21.616834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:50.981 [2024-12-09 10:21:21.616844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:29:50.981 [2024-12-09 10:21:21.616854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.981 [2024-12-09 10:21:21.616951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.981 [2024-12-09 10:21:21.616965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:50.981 [2024-12-09 10:21:21.616976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:29:50.981 [2024-12-09 10:21:21.616986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.981 [2024-12-09 10:21:21.617095] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:50.981 [2024-12-09 10:21:21.617113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:50.981 [2024-12-09 10:21:21.617136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:50.981 [2024-12-09 10:21:21.617147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:50.981 [2024-12-09 10:21:21.617164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:50.981 [2024-12-09 10:21:21.617174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:50.981 [2024-12-09 10:21:21.617183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:50.981 
[2024-12-09 10:21:21.617194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:50.981 [2024-12-09 10:21:21.617203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:50.981 [2024-12-09 10:21:21.617212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:50.981 [2024-12-09 10:21:21.617221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:50.981 [2024-12-09 10:21:21.617230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:50.981 [2024-12-09 10:21:21.617239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:50.981 [2024-12-09 10:21:21.617260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:50.981 [2024-12-09 10:21:21.617270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:50.981 [2024-12-09 10:21:21.617281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:50.981 [2024-12-09 10:21:21.617291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:50.981 [2024-12-09 10:21:21.617302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:50.981 [2024-12-09 10:21:21.617311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:50.981 [2024-12-09 10:21:21.617321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:50.981 [2024-12-09 10:21:21.617330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:50.981 [2024-12-09 10:21:21.617340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:50.981 [2024-12-09 10:21:21.617349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:50.981 [2024-12-09 10:21:21.617358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:50.981 [2024-12-09 10:21:21.617367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:50.981 [2024-12-09 10:21:21.617377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:50.981 [2024-12-09 10:21:21.617386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:50.981 [2024-12-09 10:21:21.617395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:50.981 [2024-12-09 10:21:21.617405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:50.981 [2024-12-09 10:21:21.617414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:50.981 [2024-12-09 10:21:21.617423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:50.981 [2024-12-09 10:21:21.617432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:50.981 [2024-12-09 10:21:21.617442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:50.981 [2024-12-09 10:21:21.617451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:50.981 [2024-12-09 10:21:21.617460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:50.981 [2024-12-09 10:21:21.617470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:50.981 [2024-12-09 10:21:21.617479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:50.981 [2024-12-09 10:21:21.617488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:50.981 [2024-12-09 10:21:21.617497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:29:50.981 [2024-12-09 10:21:21.617506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:50.981 [2024-12-09 10:21:21.617514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:50.981 [2024-12-09 10:21:21.617523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:50.981 [2024-12-09 10:21:21.617533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:50.981 [2024-12-09 10:21:21.617543] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:50.981 [2024-12-09 10:21:21.617553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:50.981 [2024-12-09 10:21:21.617563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:50.981 [2024-12-09 10:21:21.617573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:50.981 [2024-12-09 10:21:21.617584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:50.981 [2024-12-09 10:21:21.617594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:50.981 [2024-12-09 10:21:21.617604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:50.981 [2024-12-09 10:21:21.617613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:50.981 [2024-12-09 10:21:21.617623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:50.981 [2024-12-09 10:21:21.617632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:50.981 [2024-12-09 10:21:21.617644] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:50.981 [2024-12-09 10:21:21.617657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:50.981 [2024-12-09 10:21:21.617673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:50.981 [2024-12-09 10:21:21.617684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:50.981 [2024-12-09 10:21:21.617694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:50.981 [2024-12-09 10:21:21.617704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:50.981 [2024-12-09 10:21:21.617714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:50.981 [2024-12-09 10:21:21.617724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:50.981 [2024-12-09 10:21:21.617734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:50.981 [2024-12-09 10:21:21.617744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:50.981 [2024-12-09 10:21:21.617754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:50.981 [2024-12-09 10:21:21.617764] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:50.981 [2024-12-09 10:21:21.617773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:50.981 [2024-12-09 10:21:21.617783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:50.981 [2024-12-09 10:21:21.617792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:50.981 [2024-12-09 10:21:21.617802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:50.981 [2024-12-09 10:21:21.617812] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:50.982 [2024-12-09 10:21:21.617823] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:50.982 [2024-12-09 10:21:21.617850] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:50.982 [2024-12-09 10:21:21.617861] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:50.982 [2024-12-09 10:21:21.617871] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:50.982 [2024-12-09 10:21:21.617881] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:50.982 [2024-12-09 10:21:21.617892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.982 [2024-12-09 10:21:21.617902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:50.982 [2024-12-09 10:21:21.617912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.855 ms 00:29:50.982 [2024-12-09 10:21:21.617923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.982 [2024-12-09 10:21:21.658935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.982 [2024-12-09 10:21:21.659293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:50.982 [2024-12-09 10:21:21.659427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.950 ms 00:29:50.982 [2024-12-09 10:21:21.659488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.982 [2024-12-09 10:21:21.659746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.982 [2024-12-09 10:21:21.659800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:50.982 [2024-12-09 10:21:21.660033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:29:50.982 [2024-12-09 10:21:21.660087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.982 [2024-12-09 10:21:21.714344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.982 [2024-12-09 10:21:21.714654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:50.982 [2024-12-09 10:21:21.714788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.088 ms 
00:29:50.982 [2024-12-09 10:21:21.714880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.982 [2024-12-09 10:21:21.715151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.982 [2024-12-09 10:21:21.715206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:50.982 [2024-12-09 10:21:21.715284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:50.982 [2024-12-09 10:21:21.715488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.982 [2024-12-09 10:21:21.716426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.982 [2024-12-09 10:21:21.716588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:50.982 [2024-12-09 10:21:21.716722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.773 ms 00:29:50.982 [2024-12-09 10:21:21.716846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.982 [2024-12-09 10:21:21.717059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.982 [2024-12-09 10:21:21.717119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:50.982 [2024-12-09 10:21:21.717226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:29:50.982 [2024-12-09 10:21:21.717287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.982 [2024-12-09 10:21:21.736766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.982 [2024-12-09 10:21:21.736995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:50.982 [2024-12-09 10:21:21.737025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.303 ms 00:29:50.982 [2024-12-09 10:21:21.737039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.982 [2024-12-09 10:21:21.753388] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:50.982 [2024-12-09 10:21:21.753428] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:50.982 [2024-12-09 10:21:21.753461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.982 [2024-12-09 10:21:21.753473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:50.982 [2024-12-09 10:21:21.753485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.279 ms 00:29:50.982 [2024-12-09 10:21:21.753495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.241 [2024-12-09 10:21:21.781663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.241 [2024-12-09 10:21:21.781740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:51.241 [2024-12-09 10:21:21.781774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.126 ms 00:29:51.241 [2024-12-09 10:21:21.781786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.241 [2024-12-09 10:21:21.797320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.241 [2024-12-09 10:21:21.797363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:51.241 [2024-12-09 10:21:21.797380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.434 ms 00:29:51.241 [2024-12-09 10:21:21.797391] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.241 [2024-12-09 10:21:21.812949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.241 [2024-12-09 10:21:21.812999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:51.241 [2024-12-09 10:21:21.813030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.514 ms 00:29:51.241 [2024-12-09 10:21:21.813040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.241 [2024-12-09 10:21:21.813903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.241 [2024-12-09 10:21:21.813969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:51.241 [2024-12-09 10:21:21.813993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.747 ms 00:29:51.241 [2024-12-09 10:21:21.814004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.241 [2024-12-09 10:21:21.895409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.241 [2024-12-09 10:21:21.895814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:51.241 [2024-12-09 10:21:21.895875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.376 ms 00:29:51.242 [2024-12-09 10:21:21.895891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.242 [2024-12-09 10:21:21.908570] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:51.242 [2024-12-09 10:21:21.912399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.242 [2024-12-09 10:21:21.912436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:51.242 [2024-12-09 10:21:21.912454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.440 ms 00:29:51.242 [2024-12-09 10:21:21.912467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.242 [2024-12-09 10:21:21.912589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.242 [2024-12-09 10:21:21.912610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:51.242 [2024-12-09 10:21:21.912629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:51.242 [2024-12-09 10:21:21.912641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.242 [2024-12-09 10:21:21.912753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.242 [2024-12-09 10:21:21.912771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:51.242 [2024-12-09 10:21:21.912791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:29:51.242 [2024-12-09 10:21:21.912803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.242 [2024-12-09 10:21:21.912875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.242 [2024-12-09 10:21:21.912894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:51.242 [2024-12-09 10:21:21.912907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:29:51.242 [2024-12-09 10:21:21.912920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.242 [2024-12-09 10:21:21.913006] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:51.242 [2024-12-09 10:21:21.913025] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:29:51.242 [2024-12-09 10:21:21.913037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:51.242 [2024-12-09 10:21:21.913051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:29:51.242 [2024-12-09 10:21:21.913063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.242 [2024-12-09 10:21:21.945778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.242 [2024-12-09 10:21:21.945818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:51.242 [2024-12-09 10:21:21.945895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.685 ms 00:29:51.242 [2024-12-09 10:21:21.945907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.242 [2024-12-09 10:21:21.946002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.242 [2024-12-09 10:21:21.946020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:51.242 [2024-12-09 10:21:21.946033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:29:51.242 [2024-12-09 10:21:21.946043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.242 [2024-12-09 10:21:21.947857] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 366.300 ms, result 0 00:29:52.619  [2024-12-09T10:22:02.311Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-12-09 10:22:02.058611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:31.514 [2024-12-09 10:22:02.058949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:31.514 [2024-12-09 10:22:02.059076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:31.514 [2024-12-09 10:22:02.059126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:31.514 [2024-12-09 10:22:02.059190] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:31.514 [2024-12-09 10:22:02.063052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:31.514 [2024-12-09 10:22:02.063100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:31.514 [2024-12-09 10:22:02.063116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.707 ms 00:30:31.514 [2024-12-09 10:22:02.063127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:31.514 [2024-12-09 10:22:02.063941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:31.514 [2024-12-09 10:22:02.064009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:31.514 [2024-12-09 10:22:02.064057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.783 ms 00:30:31.514 [2024-12-09 10:22:02.064095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:31.514 [2024-12-09 10:22:02.067163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:31.514 [2024-12-09 10:22:02.067316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:31.514 [2024-12-09 10:22:02.067425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.944 ms 00:30:31.514 [2024-12-09 10:22:02.067482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:31.514 [2024-12-09 10:22:02.073281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:31.514 [2024-12-09 10:22:02.073433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:31.514 [2024-12-09 10:22:02.073544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.633 ms 00:30:31.514 [2024-12-09 10:22:02.073591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:31.514 [2024-12-09 10:22:02.101380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:31.514 [2024-12-09 10:22:02.101421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:31.514 [2024-12-09 10:22:02.101438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.513 ms 00:30:31.514 [2024-12-09 10:22:02.101449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:31.514 [2024-12-09 10:22:02.118027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:31.514 [2024-12-09 10:22:02.118187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:31.514 [2024-12-09 10:22:02.118295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.553 ms 00:30:31.514 [2024-12-09 10:22:02.118488]
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:31.514 [2024-12-09 10:22:02.118691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:31.514 [2024-12-09 10:22:02.118751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:31.514 [2024-12-09 10:22:02.118876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:30:31.514 [2024-12-09 10:22:02.118925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:31.514 [2024-12-09 10:22:02.145624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:31.514 [2024-12-09 10:22:02.145797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:31.514 [2024-12-09 10:22:02.145921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.646 ms 00:30:31.514 [2024-12-09 10:22:02.145970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:31.514 [2024-12-09 10:22:02.172202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:31.514 [2024-12-09 10:22:02.172369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:31.514 [2024-12-09 10:22:02.172479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.178 ms 00:30:31.514 [2024-12-09 10:22:02.172526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:31.514 [2024-12-09 10:22:02.198313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:31.514 [2024-12-09 10:22:02.198475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:31.514 [2024-12-09 10:22:02.198602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.729 ms 00:30:31.514 [2024-12-09 10:22:02.198649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:31.514 [2024-12-09 10:22:02.224770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:31.514 [2024-12-09 10:22:02.225011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:31.514 [2024-12-09 10:22:02.225132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.958 ms 00:30:31.514 [2024-12-09 10:22:02.225182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:31.514 [2024-12-09 10:22:02.225241] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:31.514 [2024-12-09 10:22:02.225380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:31.514 [2024-12-09 10:22:02.225455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:31.514 [2024-12-09 10:22:02.225566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:31.514 [2024-12-09 10:22:02.225691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:31.514 [2024-12-09 10:22:02.225824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:31.514 [2024-12-09 10:22:02.226076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:31.514 [2024-12-09 10:22:02.226171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:31.514 [2024-12-09 10:22:02.226237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 
261120 wr_cnt: 0 state: free 00:30:31.514 [2024-12-09 10:22:02.226478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:31.514 [2024-12-09 10:22:02.226537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:31.514 [2024-12-09 10:22:02.226654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:31.514 [2024-12-09 10:22:02.226718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:31.514 [2024-12-09 10:22:02.226772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:31.514 [2024-12-09 10:22:02.226899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:31.514 [2024-12-09 10:22:02.226960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:31.514 [2024-12-09 10:22:02.227013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:31.514 [2024-12-09 10:22:02.227165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:31.514 [2024-12-09 10:22:02.227311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:31.514 [2024-12-09 10:22:02.227464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:31.514 [2024-12-09 10:22:02.227532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:31.514 [2024-12-09 10:22:02.227639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:31.514 [2024-12-09 10:22:02.227657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:31.514 [2024-12-09 10:22:02.227669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:31.514 [2024-12-09 10:22:02.227681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:31.514 [2024-12-09 10:22:02.227693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:31.514 [2024-12-09 10:22:02.227704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:31.514 [2024-12-09 10:22:02.227715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:31.514 [2024-12-09 10:22:02.227726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:31.514 [2024-12-09 10:22:02.227737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:31.514 [2024-12-09 10:22:02.227748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:31.514 [2024-12-09 10:22:02.227760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:31.514 [2024-12-09 10:22:02.227770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:31.514 [2024-12-09 10:22:02.227781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:31.514 [2024-12-09 10:22:02.227792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:31.514 [2024-12-09 10:22:02.227803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:31.514 [2024-12-09 10:22:02.227814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:31.514 [2024-12-09 10:22:02.227825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:31.514 [2024-12-09 10:22:02.228072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:31.514 [2024-12-09 10:22:02.228089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:31.514 [2024-12-09 10:22:02.228102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:31.514 [2024-12-09 10:22:02.228113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:31.514 [2024-12-09 10:22:02.228123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:31.514 [2024-12-09 10:22:02.228135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:31.514 [2024-12-09 10:22:02.228146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:31.514 [2024-12-09 10:22:02.228157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228309] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 
10:22:02.228600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:31.515 [2024-12-09 10:22:02.228805] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:31.515 [2024-12-09 10:22:02.228816] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c37a29d1-b9ea-48f7-b142-8cb3a1c0b0d7 00:30:31.515 [2024-12-09 10:22:02.228827] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:31.515 [2024-12-09 10:22:02.228837] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:31.515 [2024-12-09 10:22:02.228865] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:31.515 [2024-12-09 10:22:02.228877] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:31.515 [2024-12-09 10:22:02.228902] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:31.515 [2024-12-09 10:22:02.228913] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:31.515 [2024-12-09 10:22:02.228924] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:31.515 [2024-12-09 10:22:02.228933] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:31.515 [2024-12-09 10:22:02.228943] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:31.515 [2024-12-09 10:22:02.228961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:31.515 [2024-12-09 10:22:02.228972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:31.515 [2024-12-09 10:22:02.228985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.715 ms 00:30:31.515 [2024-12-09 10:22:02.229000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:31.515 [2024-12-09 10:22:02.243702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:31.515 [2024-12-09 10:22:02.243871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:31.515 [2024-12-09 10:22:02.243899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.657 ms 00:30:31.515 [2024-12-09 10:22:02.243911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:31.515 [2024-12-09 10:22:02.244385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:31.515 [2024-12-09 10:22:02.244409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:31.515 [2024-12-09 10:22:02.244446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.449 ms 00:30:31.515 [2024-12-09 10:22:02.244458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:31.515 [2024-12-09 10:22:02.285749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:31.515 [2024-12-09 10:22:02.285795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:31.515 [2024-12-09 10:22:02.285818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:31.515 [2024-12-09 10:22:02.285845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:31.515 [2024-12-09 10:22:02.285911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:31.515 [2024-12-09 10:22:02.285930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:31.515 [2024-12-09 10:22:02.285965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:31.515 [2024-12-09 10:22:02.285976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:31.515 [2024-12-09 10:22:02.286076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:31.515 [2024-12-09 10:22:02.286095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:31.515 [2024-12-09 10:22:02.286107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:31.515 [2024-12-09 10:22:02.286118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:31.515 [2024-12-09 10:22:02.286139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:31.515 [2024-12-09 10:22:02.286165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:31.515 [2024-12-09 10:22:02.286176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:31.515 [2024-12-09 10:22:02.286193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:31.775 [2024-12-09 10:22:02.429797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:30:31.775 [2024-12-09 10:22:02.429932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:31.775 [2024-12-09 10:22:02.429982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:31.775 [2024-12-09 10:22:02.429997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:31.775 [2024-12-09 10:22:02.540093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:31.775 [2024-12-09 10:22:02.540196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:31.775 [2024-12-09 10:22:02.540245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:31.775 [2024-12-09 10:22:02.540258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:31.775 [2024-12-09 10:22:02.540386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:31.775 [2024-12-09 10:22:02.540406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:31.775 [2024-12-09 10:22:02.540420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:31.775 [2024-12-09 10:22:02.540432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:31.775 [2024-12-09 10:22:02.540493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:31.775 [2024-12-09 10:22:02.540510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:31.775 [2024-12-09 10:22:02.540523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:31.775 [2024-12-09 10:22:02.540535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:31.775 [2024-12-09 10:22:02.540695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:31.775 [2024-12-09 10:22:02.540715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:31.775 [2024-12-09 10:22:02.540728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:31.775 [2024-12-09 10:22:02.540740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:31.775 [2024-12-09 10:22:02.540789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:31.775 [2024-12-09 10:22:02.540807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:31.775 [2024-12-09 10:22:02.540820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:31.775 [2024-12-09 10:22:02.540832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:31.775 [2024-12-09 10:22:02.540937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:31.775 [2024-12-09 10:22:02.540957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:31.775 [2024-12-09 10:22:02.540971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:31.775 [2024-12-09 10:22:02.540983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:31.775 [2024-12-09 10:22:02.541041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:31.775 [2024-12-09 10:22:02.541058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:31.775 [2024-12-09 10:22:02.541072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:31.775 [2024-12-09 10:22:02.541084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:31.775 [2024-12-09 10:22:02.541247] 
mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 482.601 ms, result 0 00:30:33.152 00:30:33.152 00:30:33.152 10:22:03 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:35.052 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:30:35.052 10:22:05 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:30:35.052 [2024-12-09 10:22:05.707946] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:30:35.052 [2024-12-09 10:22:05.708148] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80801 ] 00:30:35.311 [2024-12-09 10:22:05.883554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:35.311 [2024-12-09 10:22:06.002982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:35.569 [2024-12-09 10:22:06.344241] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:35.569 [2024-12-09 10:22:06.344330] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:35.829 [2024-12-09 10:22:06.505696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.829 [2024-12-09 10:22:06.505756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:35.829 [2024-12-09 10:22:06.505792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:35.829 [2024-12-09 10:22:06.505803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.829 [2024-12-09 10:22:06.505896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.829 [2024-12-09 10:22:06.505919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:35.829 [2024-12-09 10:22:06.505931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:30:35.829 [2024-12-09 10:22:06.505942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.829 [2024-12-09 10:22:06.505987] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:35.829 [2024-12-09 10:22:06.507048] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:35.829 [2024-12-09 10:22:06.507090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.829 [2024-12-09 10:22:06.507104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:35.829 [2024-12-09 10:22:06.507116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.110 ms 00:30:35.829 [2024-12-09 10:22:06.507127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.829 [2024-12-09 10:22:06.509566] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:35.829 [2024-12-09 10:22:06.524064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.829 [2024-12-09 10:22:06.524102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:35.829 [2024-12-09 10:22:06.524134] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.499 ms 00:30:35.829 [2024-12-09 10:22:06.524145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.829 [2024-12-09 10:22:06.524217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.829 [2024-12-09 10:22:06.524235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:35.829 [2024-12-09 10:22:06.524246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:30:35.829 [2024-12-09 10:22:06.524256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.829 [2024-12-09 10:22:06.534619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.829 [2024-12-09 10:22:06.534866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:35.829 [2024-12-09 10:22:06.534910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.281 ms 00:30:35.829 [2024-12-09 10:22:06.534953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.829 [2024-12-09 10:22:06.535082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.829 [2024-12-09 10:22:06.535103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:35.830 [2024-12-09 10:22:06.535117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:30:35.830 [2024-12-09 10:22:06.535128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.830 [2024-12-09 10:22:06.535224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.830 [2024-12-09 10:22:06.535242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:35.830 [2024-12-09 10:22:06.535254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:30:35.830 [2024-12-09 10:22:06.535264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.830 [2024-12-09 10:22:06.535320] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:35.830 [2024-12-09 10:22:06.540161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.830 [2024-12-09 10:22:06.540195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:35.830 [2024-12-09 10:22:06.540247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.865 ms 00:30:35.830 [2024-12-09 10:22:06.540258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.830 [2024-12-09 10:22:06.540296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.830 [2024-12-09 10:22:06.540310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:35.830 [2024-12-09 10:22:06.540322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:30:35.830 [2024-12-09 10:22:06.540331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.830 [2024-12-09 10:22:06.540377] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:35.830 [2024-12-09 10:22:06.540410] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:35.830 [2024-12-09 10:22:06.540447] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:35.830 [2024-12-09 10:22:06.540471] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:35.830 [2024-12-09 10:22:06.540566] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:35.830 [2024-12-09 10:22:06.540580] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:35.830 [2024-12-09 10:22:06.540593] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:35.830 [2024-12-09 10:22:06.540605] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:35.830 [2024-12-09 10:22:06.540617] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:35.830 [2024-12-09 10:22:06.540628] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:35.830 [2024-12-09 10:22:06.540638] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:35.830 [2024-12-09 10:22:06.540652] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:35.830 [2024-12-09 10:22:06.540662] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:35.830 [2024-12-09 10:22:06.540672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.830 [2024-12-09 10:22:06.540682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:35.830 [2024-12-09 10:22:06.540693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:30:35.830 [2024-12-09 10:22:06.540703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.830 [2024-12-09 10:22:06.540781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.830 [2024-12-09 10:22:06.540794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:35.830 [2024-12-09 10:22:06.540804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:30:35.830 [2024-12-09 10:22:06.540814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.830 [2024-12-09 10:22:06.540993] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:35.830 [2024-12-09 10:22:06.541031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:35.830 [2024-12-09 10:22:06.541043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:35.830 [2024-12-09 10:22:06.541054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:35.830 [2024-12-09 10:22:06.541065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:35.830 [2024-12-09 10:22:06.541075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:35.830 [2024-12-09 10:22:06.541085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:35.830 [2024-12-09 10:22:06.541097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:35.830 [2024-12-09 10:22:06.541106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:35.830 [2024-12-09 10:22:06.541116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:35.830 [2024-12-09 10:22:06.541125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:35.830 [2024-12-09 10:22:06.541135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:35.830 [2024-12-09 
10:22:06.541144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:35.830 [2024-12-09 10:22:06.541181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:35.830 [2024-12-09 10:22:06.541201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:35.830 [2024-12-09 10:22:06.541211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:35.830 [2024-12-09 10:22:06.541222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:35.830 [2024-12-09 10:22:06.541232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:35.830 [2024-12-09 10:22:06.541241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:35.830 [2024-12-09 10:22:06.541251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:35.830 [2024-12-09 10:22:06.541260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:35.830 [2024-12-09 10:22:06.541286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:35.830 [2024-12-09 10:22:06.541312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:35.830 [2024-12-09 10:22:06.541337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:35.830 [2024-12-09 10:22:06.541346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:35.830 [2024-12-09 10:22:06.541356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:35.830 [2024-12-09 10:22:06.541371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:35.830 [2024-12-09 10:22:06.541387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:35.830 [2024-12-09 10:22:06.541401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:35.830 [2024-12-09 10:22:06.541411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:35.830 [2024-12-09 10:22:06.541422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:35.830 [2024-12-09 10:22:06.541438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:35.830 [2024-12-09 10:22:06.541458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:35.830 [2024-12-09 10:22:06.541479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:35.830 [2024-12-09 10:22:06.541498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:35.830 [2024-12-09 10:22:06.541519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:35.830 [2024-12-09 10:22:06.541537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:35.830 [2024-12-09 10:22:06.541556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:35.830 [2024-12-09 10:22:06.541575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:35.830 [2024-12-09 10:22:06.541592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:35.830 [2024-12-09 10:22:06.541603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:35.830 [2024-12-09 10:22:06.541613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:35.830 [2024-12-09 10:22:06.541623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:35.830 [2024-12-09 10:22:06.541633] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] 
Base device layout: 00:30:35.830 [2024-12-09 10:22:06.541644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:35.830 [2024-12-09 10:22:06.541656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:35.830 [2024-12-09 10:22:06.541674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:35.830 [2024-12-09 10:22:06.541694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:35.830 [2024-12-09 10:22:06.541711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:35.830 [2024-12-09 10:22:06.541730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:35.830 [2024-12-09 10:22:06.541751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:35.830 [2024-12-09 10:22:06.541771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:35.830 [2024-12-09 10:22:06.541790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:35.830 [2024-12-09 10:22:06.541810] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:35.830 [2024-12-09 10:22:06.541834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:35.830 [2024-12-09 10:22:06.541864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:35.830 [2024-12-09 10:22:06.541887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:35.831 [2024-12-09 10:22:06.541906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:35.831 [2024-12-09 10:22:06.541948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:35.831 [2024-12-09 10:22:06.541965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:35.831 [2024-12-09 10:22:06.541976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:35.831 [2024-12-09 10:22:06.541987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:35.831 [2024-12-09 10:22:06.542001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:35.831 [2024-12-09 10:22:06.542018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:35.831 [2024-12-09 10:22:06.542039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:35.831 [2024-12-09 10:22:06.542060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:35.831 [2024-12-09 10:22:06.542080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:35.831 [2024-12-09 10:22:06.542101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:35.831 [2024-12-09 10:22:06.542122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:35.831 [2024-12-09 10:22:06.542150] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:35.831 [2024-12-09 10:22:06.542187] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:35.831 [2024-12-09 10:22:06.542201] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:35.831 [2024-12-09 10:22:06.542213] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:35.831 [2024-12-09 10:22:06.542224] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:35.831 [2024-12-09 10:22:06.542238] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:35.831 [2024-12-09 10:22:06.542260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.831 [2024-12-09 10:22:06.542282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:35.831 [2024-12-09 10:22:06.542297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.337 ms 00:30:35.831 [2024-12-09 10:22:06.542326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.831 [2024-12-09 10:22:06.582216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.831 [2024-12-09 10:22:06.582285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:35.831 [2024-12-09 10:22:06.582306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.756 ms 00:30:35.831 [2024-12-09 10:22:06.582324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.831 [2024-12-09 10:22:06.582441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.831 [2024-12-09 10:22:06.582456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:35.831 [2024-12-09 10:22:06.582484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:30:35.831 [2024-12-09 10:22:06.582511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.105 [2024-12-09 10:22:06.632969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.105 [2024-12-09 10:22:06.633279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:36.105 [2024-12-09 10:22:06.633435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.356 ms 00:30:36.105 [2024-12-09 10:22:06.633587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.105 [2024-12-09 10:22:06.633712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.105 [2024-12-09 10:22:06.633736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:36.105 [2024-12-09 10:22:06.633758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:36.105 [2024-12-09 10:22:06.633769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.105 
[2024-12-09 10:22:06.634532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.105 [2024-12-09 10:22:06.634556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:36.105 [2024-12-09 10:22:06.634570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.611 ms 00:30:36.105 [2024-12-09 10:22:06.634580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.105 [2024-12-09 10:22:06.634844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.105 [2024-12-09 10:22:06.634903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:36.105 [2024-12-09 10:22:06.634942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:30:36.105 [2024-12-09 10:22:06.634964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.105 [2024-12-09 10:22:06.653656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.105 [2024-12-09 10:22:06.653898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:36.105 [2024-12-09 10:22:06.653935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.645 ms 00:30:36.105 [2024-12-09 10:22:06.653961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.105 [2024-12-09 10:22:06.671565] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:36.105 [2024-12-09 10:22:06.671609] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:36.105 [2024-12-09 10:22:06.671629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.105 [2024-12-09 10:22:06.671657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:36.105 [2024-12-09 10:22:06.671670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.425 ms 00:30:36.105 [2024-12-09 10:22:06.671680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.105 [2024-12-09 10:22:06.699310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.105 [2024-12-09 10:22:06.699373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:36.105 [2024-12-09 10:22:06.699408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.563 ms 00:30:36.105 [2024-12-09 10:22:06.699419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.105 [2024-12-09 10:22:06.713754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.105 [2024-12-09 10:22:06.713792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:36.105 [2024-12-09 10:22:06.713823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.289 ms 00:30:36.105 [2024-12-09 10:22:06.713834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.105 [2024-12-09 10:22:06.728174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.105 [2024-12-09 10:22:06.728228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:36.105 [2024-12-09 10:22:06.728258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.271 ms 00:30:36.105 [2024-12-09 10:22:06.728269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.105 [2024-12-09 10:22:06.729231] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:30:36.105 [2024-12-09 10:22:06.729271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:36.105 [2024-12-09 10:22:06.729294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.816 ms 00:30:36.105 [2024-12-09 10:22:06.729306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.105 [2024-12-09 10:22:06.799215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.105 [2024-12-09 10:22:06.799316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:36.105 [2024-12-09 10:22:06.799361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.878 ms 00:30:36.105 [2024-12-09 10:22:06.799372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.105 [2024-12-09 10:22:06.809832] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:36.105 [2024-12-09 10:22:06.813145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.105 [2024-12-09 10:22:06.813178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:36.105 [2024-12-09 10:22:06.813212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.699 ms 00:30:36.105 [2024-12-09 10:22:06.813223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.105 [2024-12-09 10:22:06.813339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.105 [2024-12-09 10:22:06.813358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:36.106 [2024-12-09 10:22:06.813375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:36.106 [2024-12-09 10:22:06.813386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.106 [2024-12-09 10:22:06.813489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.106 [2024-12-09 10:22:06.813506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:36.106 [2024-12-09 10:22:06.813517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:30:36.106 [2024-12-09 10:22:06.813528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.106 [2024-12-09 10:22:06.813559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.106 [2024-12-09 10:22:06.813573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:36.106 [2024-12-09 10:22:06.813585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:36.106 [2024-12-09 10:22:06.813595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.106 [2024-12-09 10:22:06.813645] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:36.106 [2024-12-09 10:22:06.813661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.106 [2024-12-09 10:22:06.813671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:36.106 [2024-12-09 10:22:06.813682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:30:36.106 [2024-12-09 10:22:06.813692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.106 [2024-12-09 10:22:06.841295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.106 [2024-12-09 10:22:06.841333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Set FTL dirty state 00:30:36.106 [2024-12-09 10:22:06.841370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.578 ms 00:30:36.106 [2024-12-09 10:22:06.841382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.106 [2024-12-09 10:22:06.841458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:36.106 [2024-12-09 10:22:06.841476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:36.106 [2024-12-09 10:22:06.841488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:30:36.106 [2024-12-09 10:22:06.841498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:36.106 [2024-12-09 10:22:06.843229] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 336.888 ms, result 0 00:30:37.486  [2024-12-09T10:22:09.219Z] Copying: 22/1024 [MB] (22 MBps) [2024-12-09T10:22:10.156Z] Copying: 45/1024 [MB] (22 MBps) [2024-12-09T10:22:11.092Z] Copying: 67/1024 [MB] (22 MBps) [2024-12-09T10:22:12.029Z] Copying: 90/1024 [MB] (23 MBps) [2024-12-09T10:22:12.980Z] Copying: 114/1024 [MB] (23 MBps) [2024-12-09T10:22:13.943Z] Copying: 137/1024 [MB] (23 MBps) [2024-12-09T10:22:14.880Z] Copying: 161/1024 [MB] (23 MBps) [2024-12-09T10:22:16.255Z] Copying: 185/1024 [MB] (23 MBps) [2024-12-09T10:22:17.191Z] Copying: 210/1024 [MB] (24 MBps) [2024-12-09T10:22:18.128Z] Copying: 234/1024 [MB] (23 MBps) [2024-12-09T10:22:19.064Z] Copying: 258/1024 [MB] (24 MBps) [2024-12-09T10:22:20.000Z] Copying: 281/1024 [MB] (23 MBps) [2024-12-09T10:22:20.936Z] Copying: 307/1024 [MB] (25 MBps) [2024-12-09T10:22:21.873Z] Copying: 333/1024 [MB] (25 MBps) [2024-12-09T10:22:23.249Z] Copying: 360/1024 [MB] (27 MBps) [2024-12-09T10:22:24.185Z] Copying: 388/1024 [MB] (27 MBps) [2024-12-09T10:22:25.215Z] Copying: 415/1024 [MB] (27 MBps) [2024-12-09T10:22:26.152Z] Copying: 439/1024 [MB] (24 MBps) [2024-12-09T10:22:27.088Z] Copying: 466/1024 [MB] (26 MBps) [2024-12-09T10:22:28.031Z] Copying: 492/1024 [MB] (26 MBps) [2024-12-09T10:22:28.966Z] Copying: 517/1024 [MB] (25 MBps) [2024-12-09T10:22:29.901Z] Copying: 542/1024 [MB] (25 MBps) [2024-12-09T10:22:31.277Z] Copying: 567/1024 [MB] (24 MBps) [2024-12-09T10:22:31.856Z] Copying: 590/1024 [MB] (22 MBps) [2024-12-09T10:22:33.231Z] Copying: 615/1024 [MB] (25 MBps) [2024-12-09T10:22:34.167Z] Copying: 640/1024 [MB] (24 MBps) [2024-12-09T10:22:35.126Z] Copying: 664/1024 [MB] (23 MBps) [2024-12-09T10:22:36.059Z] Copying: 687/1024 [MB] (23 MBps) [2024-12-09T10:22:36.992Z] Copying: 712/1024 [MB] (24 MBps) [2024-12-09T10:22:37.927Z] Copying: 737/1024 [MB] (24 MBps) [2024-12-09T10:22:38.863Z] Copying: 762/1024 [MB] (25 MBps) [2024-12-09T10:22:40.240Z] Copying: 787/1024 [MB] (25 MBps) [2024-12-09T10:22:41.177Z] Copying: 812/1024 [MB] (24 MBps) [2024-12-09T10:22:42.126Z] Copying: 836/1024 [MB] (24 MBps) [2024-12-09T10:22:43.061Z] Copying: 863/1024 [MB] (26 MBps) [2024-12-09T10:22:43.997Z] Copying: 888/1024 [MB] (25 MBps) [2024-12-09T10:22:44.932Z] Copying: 912/1024 [MB] (24 MBps) [2024-12-09T10:22:45.867Z] Copying: 937/1024 [MB] (25 MBps) [2024-12-09T10:22:47.254Z] Copying: 962/1024 [MB] (24 MBps) [2024-12-09T10:22:48.190Z] Copying: 986/1024 [MB] (23 MBps) [2024-12-09T10:22:49.126Z] Copying: 1009/1024 [MB] (23 MBps) [2024-12-09T10:22:49.385Z] Copying: 1023/1024 [MB] (13 MBps) [2024-12-09T10:22:49.385Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-12-09 10:22:49.290429] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.588 [2024-12-09 10:22:49.290539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:18.588 [2024-12-09 10:22:49.290570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:18.588 [2024-12-09 10:22:49.290582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.588 [2024-12-09 10:22:49.291331] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:18.588 [2024-12-09 10:22:49.296446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.588 [2024-12-09 10:22:49.296487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:18.588 [2024-12-09 10:22:49.296503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.049 ms 00:31:18.588 [2024-12-09 10:22:49.296513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.588 [2024-12-09 10:22:49.308032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.588 [2024-12-09 10:22:49.308073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:18.588 [2024-12-09 10:22:49.308090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.885 ms 00:31:18.588 [2024-12-09 10:22:49.308109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.588 [2024-12-09 10:22:49.330743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.588 [2024-12-09 10:22:49.330822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:18.588 [2024-12-09 10:22:49.330879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.597 ms 00:31:18.588 [2024-12-09 10:22:49.330891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.588 [2024-12-09 10:22:49.336684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.588 [2024-12-09 10:22:49.336745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:18.588 [2024-12-09 10:22:49.336759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.756 ms 00:31:18.588 [2024-12-09 10:22:49.336778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.588 [2024-12-09 10:22:49.367681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.588 [2024-12-09 10:22:49.367722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:18.588 [2024-12-09 10:22:49.367738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.826 ms 00:31:18.588 [2024-12-09 10:22:49.367749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.848 [2024-12-09 10:22:49.386133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.848 [2024-12-09 10:22:49.386172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:18.848 [2024-12-09 10:22:49.386191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.342 ms 00:31:18.848 [2024-12-09 10:22:49.386202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.848 [2024-12-09 10:22:49.493563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.848 [2024-12-09 10:22:49.493612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:18.848 [2024-12-09 10:22:49.493630] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 107.316 ms 00:31:18.848 [2024-12-09 10:22:49.493641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.848 [2024-12-09 10:22:49.519058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.848 [2024-12-09 10:22:49.519096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:18.848 [2024-12-09 10:22:49.519117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.398 ms 00:31:18.848 [2024-12-09 10:22:49.519128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.848 [2024-12-09 10:22:49.543248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.848 [2024-12-09 10:22:49.543302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:18.848 [2024-12-09 10:22:49.543323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.082 ms 00:31:18.848 [2024-12-09 10:22:49.543333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.848 [2024-12-09 10:22:49.567241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.848 [2024-12-09 10:22:49.567278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:18.848 [2024-12-09 10:22:49.567294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.869 ms 00:31:18.849 [2024-12-09 10:22:49.567303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.849 [2024-12-09 10:22:49.591608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.849 [2024-12-09 10:22:49.591647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:18.849 [2024-12-09 10:22:49.591663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.243 ms 00:31:18.849 [2024-12-09 10:22:49.591673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.849 [2024-12-09 10:22:49.591714] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:18.849 [2024-12-09 10:22:49.591750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 116480 / 261120 wr_cnt: 1 state: open 00:31:18.849 [2024-12-09 10:22:49.591764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.591775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.591785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.591796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.591807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.591817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.591827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.591865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.591880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.591890] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.591901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.591911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.591924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.591935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.591946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.591957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.591968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.591978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.591988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.591999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 
[2024-12-09 10:22:49.592187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:31:18.849 [2024-12-09 10:22:49.592471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:18.849 [2024-12-09 10:22:49.592753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:31:18.850 [2024-12-09 10:22:49.592763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:18.850 [2024-12-09 10:22:49.592774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:18.850 [2024-12-09 10:22:49.592784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:18.850 [2024-12-09 10:22:49.592795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:18.850 [2024-12-09 10:22:49.592805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:18.850 [2024-12-09 10:22:49.592815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:18.850 [2024-12-09 10:22:49.592825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:18.850 [2024-12-09 10:22:49.592835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:18.850 [2024-12-09 10:22:49.592846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:18.850 [2024-12-09 10:22:49.592870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:18.850 [2024-12-09 10:22:49.592881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:18.850 [2024-12-09 10:22:49.592919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:18.850 [2024-12-09 10:22:49.592931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:18.850 [2024-12-09 10:22:49.592951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:18.850 [2024-12-09 10:22:49.592960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:18.850 [2024-12-09 10:22:49.592978] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:18.850 [2024-12-09 10:22:49.592988] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c37a29d1-b9ea-48f7-b142-8cb3a1c0b0d7 00:31:18.850 [2024-12-09 10:22:49.592999] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 116480 00:31:18.850 [2024-12-09 10:22:49.593008] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 117440 00:31:18.850 [2024-12-09 10:22:49.593017] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 116480 00:31:18.850 [2024-12-09 10:22:49.593028] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0082 00:31:18.850 [2024-12-09 10:22:49.593053] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:18.850 [2024-12-09 10:22:49.593064] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:18.850 [2024-12-09 10:22:49.593076] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:18.850 [2024-12-09 10:22:49.593084] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:18.850 [2024-12-09 10:22:49.593093] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:18.850 [2024-12-09 10:22:49.593102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:31:18.850 [2024-12-09 10:22:49.593127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:18.850 [2024-12-09 10:22:49.593137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.390 ms 00:31:18.850 [2024-12-09 10:22:49.593149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.850 [2024-12-09 10:22:49.608497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.850 [2024-12-09 10:22:49.608550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:18.850 [2024-12-09 10:22:49.608582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.327 ms 00:31:18.850 [2024-12-09 10:22:49.608594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.850 [2024-12-09 10:22:49.609162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.850 [2024-12-09 10:22:49.609186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:18.850 [2024-12-09 10:22:49.609201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.522 ms 00:31:18.850 [2024-12-09 10:22:49.609213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.110 [2024-12-09 10:22:49.653631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:19.110 [2024-12-09 10:22:49.653704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:19.110 [2024-12-09 10:22:49.653737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:19.110 [2024-12-09 10:22:49.653748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.110 [2024-12-09 10:22:49.653822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:19.110 [2024-12-09 10:22:49.653838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:19.110 [2024-12-09 10:22:49.653849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:19.110 [2024-12-09 10:22:49.653903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.110 [2024-12-09 10:22:49.653985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:19.110 [2024-12-09 10:22:49.654009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:19.110 [2024-12-09 10:22:49.654021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:19.110 [2024-12-09 10:22:49.654031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.110 [2024-12-09 10:22:49.654054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:19.110 [2024-12-09 10:22:49.654068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:19.110 [2024-12-09 10:22:49.654079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:19.110 [2024-12-09 10:22:49.654089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.110 [2024-12-09 10:22:49.747247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:19.110 [2024-12-09 10:22:49.747325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:19.110 [2024-12-09 10:22:49.747349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:19.110 [2024-12-09 10:22:49.747359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.110 [2024-12-09 
10:22:49.820755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:19.110 [2024-12-09 10:22:49.820840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:19.110 [2024-12-09 10:22:49.820861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:19.110 [2024-12-09 10:22:49.820888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.110 [2024-12-09 10:22:49.821023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:19.110 [2024-12-09 10:22:49.821042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:19.110 [2024-12-09 10:22:49.821054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:19.110 [2024-12-09 10:22:49.821070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.110 [2024-12-09 10:22:49.821136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:19.110 [2024-12-09 10:22:49.821157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:19.110 [2024-12-09 10:22:49.821173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:19.110 [2024-12-09 10:22:49.821183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.110 [2024-12-09 10:22:49.821305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:19.110 [2024-12-09 10:22:49.821324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:19.110 [2024-12-09 10:22:49.821336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:19.110 [2024-12-09 10:22:49.821353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.110 [2024-12-09 10:22:49.821398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:19.110 [2024-12-09 10:22:49.821416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:19.110 [2024-12-09 10:22:49.821427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:19.110 [2024-12-09 10:22:49.821438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.110 [2024-12-09 10:22:49.821483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:19.110 [2024-12-09 10:22:49.821497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:19.110 [2024-12-09 10:22:49.821508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:19.110 [2024-12-09 10:22:49.821518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.110 [2024-12-09 10:22:49.821594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:19.110 [2024-12-09 10:22:49.821610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:19.110 [2024-12-09 10:22:49.821622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:19.110 [2024-12-09 10:22:49.821632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.110 [2024-12-09 10:22:49.821787] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 532.517 ms, result 0 00:31:21.012 00:31:21.012 00:31:21.012 10:22:51 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:31:21.012 [2024-12-09 10:22:51.495477] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:31:21.012 [2024-12-09 10:22:51.495679] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81259 ] 00:31:21.012 [2024-12-09 10:22:51.675719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:21.271 [2024-12-09 10:22:51.810361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:21.530 [2024-12-09 10:22:52.199524] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:21.530 [2024-12-09 10:22:52.199658] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:21.789 [2024-12-09 10:22:52.365449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.789 [2024-12-09 10:22:52.365535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:21.789 [2024-12-09 10:22:52.365576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:21.789 [2024-12-09 10:22:52.365590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.789 [2024-12-09 10:22:52.365657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.789 [2024-12-09 10:22:52.365681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:21.789 [2024-12-09 10:22:52.365695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:31:21.789 [2024-12-09 10:22:52.365707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.789 [2024-12-09 10:22:52.365739] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:21.789 [2024-12-09 10:22:52.366685] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:21.789 [2024-12-09 10:22:52.366730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.789 [2024-12-09 10:22:52.366745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:21.789 [2024-12-09 10:22:52.366759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.998 ms 00:31:21.789 [2024-12-09 10:22:52.366772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.789 [2024-12-09 10:22:52.369366] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:21.789 [2024-12-09 10:22:52.387388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.789 [2024-12-09 10:22:52.387447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:21.789 [2024-12-09 10:22:52.387491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.020 ms 00:31:21.789 [2024-12-09 10:22:52.387527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.789 [2024-12-09 10:22:52.387641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.789 [2024-12-09 10:22:52.387671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:21.789 [2024-12-09 10:22:52.387692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 
00:31:21.789 [2024-12-09 10:22:52.387711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.789 [2024-12-09 10:22:52.398791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.789 [2024-12-09 10:22:52.398866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:21.789 [2024-12-09 10:22:52.398898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.911 ms 00:31:21.789 [2024-12-09 10:22:52.398912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.789 [2024-12-09 10:22:52.399036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.789 [2024-12-09 10:22:52.399064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:21.789 [2024-12-09 10:22:52.399078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:31:21.789 [2024-12-09 10:22:52.399091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.789 [2024-12-09 10:22:52.399197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.789 [2024-12-09 10:22:52.399218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:21.789 [2024-12-09 10:22:52.399232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:31:21.789 [2024-12-09 10:22:52.399251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.789 [2024-12-09 10:22:52.399293] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:21.789 [2024-12-09 10:22:52.404954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.789 [2024-12-09 10:22:52.405001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:21.789 [2024-12-09 10:22:52.405020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.674 ms 00:31:21.789 [2024-12-09 10:22:52.405033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.789 [2024-12-09 10:22:52.405079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.789 [2024-12-09 10:22:52.405097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:21.789 [2024-12-09 10:22:52.405111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:31:21.789 [2024-12-09 10:22:52.405123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.789 [2024-12-09 10:22:52.405175] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:21.789 [2024-12-09 10:22:52.405214] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:21.789 [2024-12-09 10:22:52.405264] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:21.789 [2024-12-09 10:22:52.405288] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:21.789 [2024-12-09 10:22:52.405402] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:21.789 [2024-12-09 10:22:52.405418] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:21.789 [2024-12-09 10:22:52.405434] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 
0x190 bytes 00:31:21.790 [2024-12-09 10:22:52.405456] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:21.790 [2024-12-09 10:22:52.405470] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:21.790 [2024-12-09 10:22:52.405484] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:21.790 [2024-12-09 10:22:52.405501] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:21.790 [2024-12-09 10:22:52.405513] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:21.790 [2024-12-09 10:22:52.405526] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:21.790 [2024-12-09 10:22:52.405539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.790 [2024-12-09 10:22:52.405552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:21.790 [2024-12-09 10:22:52.405564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:31:21.790 [2024-12-09 10:22:52.405576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.790 [2024-12-09 10:22:52.405678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.790 [2024-12-09 10:22:52.405696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:21.790 [2024-12-09 10:22:52.405712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:31:21.790 [2024-12-09 10:22:52.405724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.790 [2024-12-09 10:22:52.405877] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:21.790 [2024-12-09 10:22:52.405903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:21.790 [2024-12-09 10:22:52.405916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:21.790 [2024-12-09 10:22:52.405929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:21.790 [2024-12-09 10:22:52.405942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:21.790 [2024-12-09 10:22:52.405953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:21.790 [2024-12-09 10:22:52.405964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:21.790 [2024-12-09 10:22:52.405975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:21.790 [2024-12-09 10:22:52.405986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:21.790 [2024-12-09 10:22:52.406005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:21.790 [2024-12-09 10:22:52.406016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:21.790 [2024-12-09 10:22:52.406026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:21.790 [2024-12-09 10:22:52.406036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:21.790 [2024-12-09 10:22:52.406062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:21.790 [2024-12-09 10:22:52.406074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:21.790 [2024-12-09 10:22:52.406087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:21.790 [2024-12-09 10:22:52.406099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region nvc_md_mirror 00:31:21.790 [2024-12-09 10:22:52.406110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:21.790 [2024-12-09 10:22:52.406121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:21.790 [2024-12-09 10:22:52.406132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:21.790 [2024-12-09 10:22:52.406144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:21.790 [2024-12-09 10:22:52.406155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:21.790 [2024-12-09 10:22:52.406167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:21.790 [2024-12-09 10:22:52.406178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:21.790 [2024-12-09 10:22:52.406189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:21.790 [2024-12-09 10:22:52.406199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:21.790 [2024-12-09 10:22:52.406210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:21.790 [2024-12-09 10:22:52.406220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:21.790 [2024-12-09 10:22:52.406231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:21.790 [2024-12-09 10:22:52.406242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:21.790 [2024-12-09 10:22:52.406253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:21.790 [2024-12-09 10:22:52.406264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:21.790 [2024-12-09 10:22:52.406276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:21.790 [2024-12-09 10:22:52.406286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:21.790 [2024-12-09 10:22:52.406297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:21.790 [2024-12-09 10:22:52.406307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:21.790 [2024-12-09 10:22:52.406319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:21.790 [2024-12-09 10:22:52.406331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:21.790 [2024-12-09 10:22:52.406343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:21.790 [2024-12-09 10:22:52.406355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:21.790 [2024-12-09 10:22:52.406366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:21.790 [2024-12-09 10:22:52.406377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:21.790 [2024-12-09 10:22:52.406388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:21.790 [2024-12-09 10:22:52.406399] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:21.790 [2024-12-09 10:22:52.406412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:21.790 [2024-12-09 10:22:52.406424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:21.790 [2024-12-09 10:22:52.406436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:21.790 [2024-12-09 10:22:52.406451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:21.790 [2024-12-09 10:22:52.406462] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:21.790 [2024-12-09 10:22:52.406474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:21.790 [2024-12-09 10:22:52.406486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:21.790 [2024-12-09 10:22:52.406497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:21.790 [2024-12-09 10:22:52.406508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:21.790 [2024-12-09 10:22:52.406522] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:21.790 [2024-12-09 10:22:52.406557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:21.790 [2024-12-09 10:22:52.406577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:21.790 [2024-12-09 10:22:52.406589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:21.790 [2024-12-09 10:22:52.406601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:21.790 [2024-12-09 10:22:52.406613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:21.790 [2024-12-09 10:22:52.406625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:21.790 [2024-12-09 10:22:52.406636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:21.790 [2024-12-09 10:22:52.406648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:21.790 [2024-12-09 10:22:52.406659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:21.790 [2024-12-09 10:22:52.406671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:21.790 [2024-12-09 10:22:52.406682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:21.790 [2024-12-09 10:22:52.406693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:21.790 [2024-12-09 10:22:52.406704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:21.790 [2024-12-09 10:22:52.406715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:21.790 [2024-12-09 10:22:52.406727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:21.790 [2024-12-09 10:22:52.406739] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:21.790 [2024-12-09 10:22:52.406753] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:21.790 [2024-12-09 10:22:52.406767] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:21.790 [2024-12-09 10:22:52.406780] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:21.790 [2024-12-09 10:22:52.406791] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:21.790 [2024-12-09 10:22:52.406803] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:21.790 [2024-12-09 10:22:52.406816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.790 [2024-12-09 10:22:52.406851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:21.790 [2024-12-09 10:22:52.406868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.031 ms 00:31:21.790 [2024-12-09 10:22:52.406880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.790 [2024-12-09 10:22:52.451620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.790 [2024-12-09 10:22:52.451697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:21.790 [2024-12-09 10:22:52.451759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.661 ms 00:31:21.790 [2024-12-09 10:22:52.451772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.790 [2024-12-09 10:22:52.451935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.790 [2024-12-09 10:22:52.451955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:21.790 [2024-12-09 10:22:52.451970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:31:21.790 [2024-12-09 10:22:52.452005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.791 [2024-12-09 10:22:52.507533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.791 [2024-12-09 10:22:52.507615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:21.791 [2024-12-09 10:22:52.507672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.418 ms 00:31:21.791 [2024-12-09 10:22:52.507685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.791 [2024-12-09 10:22:52.507773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.791 [2024-12-09 10:22:52.507798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:21.791 [2024-12-09 10:22:52.507813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:21.791 [2024-12-09 10:22:52.507824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.791 [2024-12-09 10:22:52.508746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.791 [2024-12-09 10:22:52.508775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:21.791 [2024-12-09 10:22:52.508791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.752 ms 00:31:21.791 [2024-12-09 10:22:52.508804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.791 [2024-12-09 10:22:52.509032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:31:21.791 [2024-12-09 10:22:52.509062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:21.791 [2024-12-09 10:22:52.509076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:31:21.791 [2024-12-09 10:22:52.509088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.791 [2024-12-09 10:22:52.529606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.791 [2024-12-09 10:22:52.529661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:21.791 [2024-12-09 10:22:52.529698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.486 ms 00:31:21.791 [2024-12-09 10:22:52.529711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.791 [2024-12-09 10:22:52.547023] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:31:21.791 [2024-12-09 10:22:52.547222] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:21.791 [2024-12-09 10:22:52.547263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.791 [2024-12-09 10:22:52.547278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:21.791 [2024-12-09 10:22:52.547292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.335 ms 00:31:21.791 [2024-12-09 10:22:52.547304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.791 [2024-12-09 10:22:52.576820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.791 [2024-12-09 10:22:52.576900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:21.791 [2024-12-09 10:22:52.576937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.463 ms 00:31:21.791 [2024-12-09 10:22:52.576964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.050 [2024-12-09 10:22:52.592460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.050 [2024-12-09 10:22:52.592501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:22.050 [2024-12-09 10:22:52.592535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.422 ms 00:31:22.050 [2024-12-09 10:22:52.592546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.050 [2024-12-09 10:22:52.608045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.050 [2024-12-09 10:22:52.608121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:22.050 [2024-12-09 10:22:52.608139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.454 ms 00:31:22.050 [2024-12-09 10:22:52.608151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.050 [2024-12-09 10:22:52.609180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.050 [2024-12-09 10:22:52.609223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:22.050 [2024-12-09 10:22:52.609240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.882 ms 00:31:22.050 [2024-12-09 10:22:52.609253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.050 [2024-12-09 10:22:52.694559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.050 [2024-12-09 
10:22:52.694662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:22.050 [2024-12-09 10:22:52.694687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.252 ms 00:31:22.050 [2024-12-09 10:22:52.694701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.050 [2024-12-09 10:22:52.708127] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:22.050 [2024-12-09 10:22:52.713153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.050 [2024-12-09 10:22:52.713194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:22.050 [2024-12-09 10:22:52.713214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.366 ms 00:31:22.050 [2024-12-09 10:22:52.713227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.050 [2024-12-09 10:22:52.713366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.050 [2024-12-09 10:22:52.713389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:22.050 [2024-12-09 10:22:52.713410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:22.050 [2024-12-09 10:22:52.713423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.050 [2024-12-09 10:22:52.715564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.050 [2024-12-09 10:22:52.715740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:22.050 [2024-12-09 10:22:52.715771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.051 ms 00:31:22.050 [2024-12-09 10:22:52.715784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.050 [2024-12-09 10:22:52.715851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.050 [2024-12-09 10:22:52.715873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:22.050 [2024-12-09 10:22:52.715887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:31:22.050 [2024-12-09 10:22:52.715908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.050 [2024-12-09 10:22:52.715958] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:22.050 [2024-12-09 10:22:52.715984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.050 [2024-12-09 10:22:52.715997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:22.050 [2024-12-09 10:22:52.716010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:31:22.050 [2024-12-09 10:22:52.716022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.050 [2024-12-09 10:22:52.748115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.050 [2024-12-09 10:22:52.748160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:22.050 [2024-12-09 10:22:52.748201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.061 ms 00:31:22.050 [2024-12-09 10:22:52.748214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.050 [2024-12-09 10:22:52.748315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.050 [2024-12-09 10:22:52.748334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:22.050 [2024-12-09 
10:22:52.748347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:31:22.050 [2024-12-09 10:22:52.748358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.050 [2024-12-09 10:22:52.749944] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 383.779 ms, result 0 00:31:23.427  [2024-12-09T10:22:55.160Z] Copying: 17/1024 [MB] (17 MBps) [... 48 intermediate progress-meter updates between 2024-12-09T10:22:56Z and 10:23:42Z, steady 19-25 MBps, condensed ...] [2024-12-09T10:23:42.803Z] Copying: 1024/1024 [MB] (average 20 MBps)[2024-12-09 10:23:42.558537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.006 [2024-12-09 10:23:42.558674] mngt/ftl_mngt.c: 
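Every management step above is traced as an Action/name/duration/status quadruplet from mngt/ftl_mngt.c, and finish_msg reports the aggregate (383.779 ms for 'FTL startup'). A throwaway cross-check of the per-step durations against that total, assuming the console output was captured to a hypothetical ftl.log (not a helper from the test suite):

grep -o 'duration: [0-9.]* ms' ftl.log | awk '{sum += $2} END {printf "%.3f ms across %d steps\n", sum, NR}'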
428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:12.006 [2024-12-09 10:23:42.558739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:12.006 [2024-12-09 10:23:42.558755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.006 [2024-12-09 10:23:42.558796] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:12.006 [2024-12-09 10:23:42.563424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.006 [2024-12-09 10:23:42.564143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:12.006 [2024-12-09 10:23:42.564317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.595 ms 00:32:12.006 [2024-12-09 10:23:42.564383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.006 [2024-12-09 10:23:42.564907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.006 [2024-12-09 10:23:42.565267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:12.006 [2024-12-09 10:23:42.565457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:32:12.006 [2024-12-09 10:23:42.565521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.006 [2024-12-09 10:23:42.570198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.006 [2024-12-09 10:23:42.570385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:12.006 [2024-12-09 10:23:42.570527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.525 ms 00:32:12.006 [2024-12-09 10:23:42.570557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.006 [2024-12-09 10:23:42.577882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.006 [2024-12-09 10:23:42.578085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:12.006 [2024-12-09 10:23:42.578244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.250 ms 00:32:12.006 [2024-12-09 10:23:42.578312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.006 [2024-12-09 10:23:42.608749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.006 [2024-12-09 10:23:42.609005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:12.007 [2024-12-09 10:23:42.609139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.095 ms 00:32:12.007 [2024-12-09 10:23:42.609205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.007 [2024-12-09 10:23:42.627018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.007 [2024-12-09 10:23:42.627232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:12.007 [2024-12-09 10:23:42.627346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.532 ms 00:32:12.007 [2024-12-09 10:23:42.627400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.007 [2024-12-09 10:23:42.759425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.007 [2024-12-09 10:23:42.759616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:12.007 [2024-12-09 10:23:42.759751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 131.927 ms 00:32:12.007 [2024-12-09 10:23:42.759808] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.007 [2024-12-09 10:23:42.788169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.007 [2024-12-09 10:23:42.788356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:12.007 [2024-12-09 10:23:42.788471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.019 ms 00:32:12.007 [2024-12-09 10:23:42.788523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.267 [2024-12-09 10:23:42.815346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.267 [2024-12-09 10:23:42.815524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:12.267 [2024-12-09 10:23:42.815640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.745 ms 00:32:12.267 [2024-12-09 10:23:42.815692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.267 [2024-12-09 10:23:42.839996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.267 [2024-12-09 10:23:42.840174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:12.267 [2024-12-09 10:23:42.840204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.227 ms 00:32:12.267 [2024-12-09 10:23:42.840217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.267 [2024-12-09 10:23:42.864253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.267 [2024-12-09 10:23:42.864296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:12.267 [2024-12-09 10:23:42.864315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.944 ms 00:32:12.267 [2024-12-09 10:23:42.864326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.267 [2024-12-09 10:23:42.864368] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:12.267 [2024-12-09 10:23:42.864394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:32:12.267 [2024-12-09 10:23:42.864409 .. 10:23:42.865630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 2-100: 0 / 261120 wr_cnt: 0 state: free (99 identical entries condensed) 00:32:12.269 [2024-12-09 10:23:42.865652] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:12.269 [2024-12-09 10:23:42.865672] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c37a29d1-b9ea-48f7-b142-8cb3a1c0b0d7 00:32:12.269 [2024-12-09 10:23:42.865686] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:32:12.269 [2024-12-09 10:23:42.865699] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 15552 00:32:12.269 [2024-12-09 10:23:42.865711] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 14592 00:32:12.269 [2024-12-09 10:23:42.865732] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0658 00:32:12.269 [2024-12-09 10:23:42.865744] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:12.269 [2024-12-09 10:23:42.865770] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:12.269 [2024-12-09 10:23:42.865783] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:12.269 [2024-12-09 10:23:42.865794] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:12.269 [2024-12-09 10:23:42.865805] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:12.269 [2024-12-09 10:23:42.865817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.269 [2024-12-09 10:23:42.865845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Dump statistics 00:32:12.269 [2024-12-09 10:23:42.865860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.451 ms 00:32:12.269 [2024-12-09 10:23:42.865942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.269 [2024-12-09 10:23:42.879810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.269 [2024-12-09 10:23:42.880010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:12.269 [2024-12-09 10:23:42.880165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.807 ms 00:32:12.269 [2024-12-09 10:23:42.880224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.269 [2024-12-09 10:23:42.880756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.269 [2024-12-09 10:23:42.880949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:12.269 [2024-12-09 10:23:42.881067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.456 ms 00:32:12.269 [2024-12-09 10:23:42.881185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.269 [2024-12-09 10:23:42.919161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:12.269 [2024-12-09 10:23:42.919349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:12.269 [2024-12-09 10:23:42.919469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:12.269 [2024-12-09 10:23:42.919525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.269 [2024-12-09 10:23:42.919621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:12.269 [2024-12-09 10:23:42.919802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:12.269 [2024-12-09 10:23:42.919888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:12.269 [2024-12-09 10:23:42.919937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.269 [2024-12-09 10:23:42.920176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:12.269 [2024-12-09 10:23:42.920256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:12.269 [2024-12-09 10:23:42.920407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:12.269 [2024-12-09 10:23:42.920480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.269 [2024-12-09 10:23:42.920542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:12.269 [2024-12-09 10:23:42.920683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:12.269 [2024-12-09 10:23:42.920743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:12.269 [2024-12-09 10:23:42.920787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.269 [2024-12-09 10:23:43.010323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:12.269 [2024-12-09 10:23:43.010649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:12.269 [2024-12-09 10:23:43.010771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:12.269 [2024-12-09 10:23:43.010825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.527 [2024-12-09 10:23:43.081448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:12.527 [2024-12-09 
10:23:43.081713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:12.527 [2024-12-09 10:23:43.081852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:12.527 [2024-12-09 10:23:43.081914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.527 [2024-12-09 10:23:43.082154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:12.527 [2024-12-09 10:23:43.082293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:12.527 [2024-12-09 10:23:43.082424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:12.527 [2024-12-09 10:23:43.082555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.527 [2024-12-09 10:23:43.082708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:12.527 [2024-12-09 10:23:43.082820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:12.527 [2024-12-09 10:23:43.082973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:12.527 [2024-12-09 10:23:43.083028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.527 [2024-12-09 10:23:43.083339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:12.527 [2024-12-09 10:23:43.083480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:12.527 [2024-12-09 10:23:43.083611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:12.527 [2024-12-09 10:23:43.083735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.527 [2024-12-09 10:23:43.083862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:12.527 [2024-12-09 10:23:43.083971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:12.527 [2024-12-09 10:23:43.084087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:12.527 [2024-12-09 10:23:43.084143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.527 [2024-12-09 10:23:43.084295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:12.527 [2024-12-09 10:23:43.084379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:12.527 [2024-12-09 10:23:43.084534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:12.527 [2024-12-09 10:23:43.084617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.527 [2024-12-09 10:23:43.084694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:12.527 [2024-12-09 10:23:43.084717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:12.527 [2024-12-09 10:23:43.084732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:12.527 [2024-12-09 10:23:43.084746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.527 [2024-12-09 10:23:43.084971] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 526.389 ms, result 0 00:32:13.465 00:32:13.465 00:32:13.465 10:23:44 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:15.374 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:32:15.374 10:23:45 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:32:15.374 10:23:45 
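The restore test's pass signal is the md5 manifest check traced above ("testfile: OK"). A minimal sketch of that round-trip, with the dirty write/shutdown/restore workload in the middle elided:

md5sum testfile > testfile.md5    # record the reference digest up front
# ... write testfile through the FTL bdev, shut down, restore ...
md5sum -c testfile.md5            # prints "testfile: OK" and exits 0 on a byte-identical read-back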
ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:32:15.374 10:23:45 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:32:15.374 10:23:46 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:15.374 10:23:46 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:15.374 10:23:46 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 79648 00:32:15.374 10:23:46 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79648 ']' 00:32:15.374 10:23:46 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79648 00:32:15.374 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79648) - No such process 00:32:15.374 10:23:46 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 79648 is not found' 00:32:15.374 Process with pid 79648 is not found 00:32:15.374 10:23:46 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:32:15.374 Remove shared memory files 00:32:15.374 10:23:46 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:15.374 10:23:46 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:32:15.374 10:23:46 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:32:15.374 10:23:46 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:32:15.374 10:23:46 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:15.374 10:23:46 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:32:15.374 ************************************ 00:32:15.374 END TEST ftl_restore 00:32:15.374 ************************************ 00:32:15.374 00:32:15.374 real 3m35.009s 00:32:15.374 user 3m19.216s 00:32:15.374 sys 0m17.894s 00:32:15.374 10:23:46 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:15.374 10:23:46 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:32:15.374 10:23:46 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:32:15.374 10:23:46 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:32:15.374 10:23:46 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:15.375 10:23:46 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:15.375 ************************************ 00:32:15.375 START TEST ftl_dirty_shutdown 00:32:15.375 ************************************ 00:32:15.375 10:23:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:32:15.634 * Looking for test storage... 
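killprocess, traced just above, probes liveness with signal 0 before killing; here pid 79648 is already gone, so only the "not found" message is printed. The core pattern in isolation:

# signal 0 delivers nothing; kill -0 only reports whether $pid can be signaled
if kill -0 "$pid" 2>/dev/null; then
  kill "$pid"
else
  echo "Process with pid $pid is not found"
fi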
00:32:15.634 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:32:15.634 10:23:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:15.634 10:23:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:32:15.634 10:23:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:15.634 10:23:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:15.634 10:23:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:15.634 10:23:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:15.634 10:23:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:15.634 10:23:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:32:15.634 10:23:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:32:15.634 10:23:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:32:15.634 10:23:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:32:15.634 10:23:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:32:15.634 10:23:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:32:15.634 10:23:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:32:15.634 10:23:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:15.634 10:23:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:32:15.634 10:23:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:32:15.634 10:23:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:15.634 10:23:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:15.634 10:23:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:15.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:15.635 --rc genhtml_branch_coverage=1 00:32:15.635 --rc genhtml_function_coverage=1 00:32:15.635 --rc genhtml_legend=1 00:32:15.635 --rc geninfo_all_blocks=1 00:32:15.635 --rc geninfo_unexecuted_blocks=1 00:32:15.635 00:32:15.635 ' 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS='...' 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov ...' 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov ...' (xtrace echoes the identical --rc option block in each of these three assignments; duplicates condensed) 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
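The cmp_versions trace above (lt 1.15 2 returning 0) is how autotest decides the installed lcov predates 2.0, so the legacy --rc lcov_* option spelling is exported. A self-contained sketch of the same dotted-version comparison, simplified to the less-than case (the real helper in scripts/common.sh also handles the other operators):

# split versions on '.', '-' or ':' and compare numerically, field by field
lt() {
  local IFS=.-:
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  local v
  for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
  done
  return 1    # equal is not less-than
}
lt 1.15 2 && echo "legacy lcov"    # 1 < 2 in the first field, so this prints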
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:32:15.635 10:23:46 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:32:15.635 10:23:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=81851 00:32:15.636 10:23:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 81851 00:32:15.636 10:23:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81851 ']' 00:32:15.636 10:23:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:15.636 10:23:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:15.636 10:23:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:15.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:15.636 10:23:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:15.636 10:23:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:15.636 10:23:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:32:15.895 [2024-12-09 10:23:46.492153] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
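dirty_shutdown.sh's option handling, traced just above, is plain getopts: -c carries the NV-cache PCIe address, -u apparently a UUID (judging by the :u:c: spec), and the remaining positional argument is the base device. The same pattern in isolation (the traced 'shift 2' is what $((OPTIND - 1)) evaluates to for one -c pair):

nv_cache="" uuid=""
while getopts ":u:c:" opt; do    # leading ':' selects silent error handling
  case $opt in
    u) uuid=$OPTARG ;;
    c) nv_cache=$OPTARG ;;
  esac
done
shift $((OPTIND - 1))            # drops '-c 0000:00:10.0', leaving the device address
device=$1

Invoked as 'sketch.sh -c 0000:00:10.0 0000:00:11.0' this yields the nv_cache and device values seen above.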
00:32:15.895 [2024-12-09 10:23:46.492334] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81851 ] 00:32:15.895 [2024-12-09 10:23:46.687289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:16.183 [2024-12-09 10:23:46.850196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:17.152 10:23:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:17.152 10:23:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:32:17.152 10:23:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:32:17.152 10:23:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:32:17.152 10:23:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:32:17.152 10:23:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:32:17.152 10:23:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:32:17.152 10:23:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:32:17.410 10:23:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:32:17.410 10:23:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:32:17.410 10:23:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:32:17.410 10:23:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:32:17.410 10:23:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:17.410 10:23:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:32:17.410 10:23:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:32:17.410 10:23:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:32:17.669 10:23:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:17.669 { 00:32:17.669 "name": "nvme0n1", 00:32:17.669 "aliases": [ 00:32:17.669 "82f366b1-1a14-4c60-8761-3a02e07bda6b" 00:32:17.669 ], 00:32:17.669 "product_name": "NVMe disk", 00:32:17.669 "block_size": 4096, 00:32:17.669 "num_blocks": 1310720, 00:32:17.669 "uuid": "82f366b1-1a14-4c60-8761-3a02e07bda6b", 00:32:17.669 "numa_id": -1, 00:32:17.669 "assigned_rate_limits": { 00:32:17.669 "rw_ios_per_sec": 0, 00:32:17.669 "rw_mbytes_per_sec": 0, 00:32:17.669 "r_mbytes_per_sec": 0, 00:32:17.669 "w_mbytes_per_sec": 0 00:32:17.669 }, 00:32:17.669 "claimed": true, 00:32:17.669 "claim_type": "read_many_write_one", 00:32:17.669 "zoned": false, 00:32:17.669 "supported_io_types": { 00:32:17.669 "read": true, 00:32:17.669 "write": true, 00:32:17.669 "unmap": true, 00:32:17.669 "flush": true, 00:32:17.669 "reset": true, 00:32:17.669 "nvme_admin": true, 00:32:17.669 "nvme_io": true, 00:32:17.669 "nvme_io_md": false, 00:32:17.669 "write_zeroes": true, 00:32:17.669 "zcopy": false, 00:32:17.669 "get_zone_info": false, 00:32:17.669 "zone_management": false, 00:32:17.669 "zone_append": false, 00:32:17.669 "compare": true, 00:32:17.669 "compare_and_write": false, 00:32:17.669 "abort": true, 00:32:17.669 "seek_hole": false, 00:32:17.669 "seek_data": false, 00:32:17.669 
"copy": true, 00:32:17.669 "nvme_iov_md": false 00:32:17.669 }, 00:32:17.669 "driver_specific": { 00:32:17.669 "nvme": [ 00:32:17.669 { 00:32:17.669 "pci_address": "0000:00:11.0", 00:32:17.669 "trid": { 00:32:17.669 "trtype": "PCIe", 00:32:17.669 "traddr": "0000:00:11.0" 00:32:17.669 }, 00:32:17.669 "ctrlr_data": { 00:32:17.669 "cntlid": 0, 00:32:17.669 "vendor_id": "0x1b36", 00:32:17.669 "model_number": "QEMU NVMe Ctrl", 00:32:17.669 "serial_number": "12341", 00:32:17.669 "firmware_revision": "8.0.0", 00:32:17.669 "subnqn": "nqn.2019-08.org.qemu:12341", 00:32:17.669 "oacs": { 00:32:17.669 "security": 0, 00:32:17.669 "format": 1, 00:32:17.669 "firmware": 0, 00:32:17.669 "ns_manage": 1 00:32:17.669 }, 00:32:17.669 "multi_ctrlr": false, 00:32:17.669 "ana_reporting": false 00:32:17.669 }, 00:32:17.669 "vs": { 00:32:17.669 "nvme_version": "1.4" 00:32:17.669 }, 00:32:17.669 "ns_data": { 00:32:17.669 "id": 1, 00:32:17.669 "can_share": false 00:32:17.669 } 00:32:17.669 } 00:32:17.669 ], 00:32:17.669 "mp_policy": "active_passive" 00:32:17.669 } 00:32:17.669 } 00:32:17.669 ]' 00:32:17.669 10:23:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:17.669 10:23:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:32:17.669 10:23:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:17.669 10:23:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:32:17.669 10:23:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:32:17.669 10:23:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:32:17.669 10:23:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:32:17.669 10:23:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:32:17.669 10:23:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:32:17.669 10:23:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:17.669 10:23:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:32:17.928 10:23:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=6279a32a-519c-42f5-9eb2-82118898d67f 00:32:17.928 10:23:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:32:17.928 10:23:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6279a32a-519c-42f5-9eb2-82118898d67f 00:32:18.187 10:23:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:32:18.446 10:23:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=64ea8f8d-1847-46f8-8bf6-227b4f515734 00:32:18.446 10:23:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 64ea8f8d-1847-46f8-8bf6-227b4f515734 00:32:18.705 10:23:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=971ea55e-785b-4372-b470-c35dc00e7a9e 00:32:18.705 10:23:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:32:18.705 10:23:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 971ea55e-785b-4372-b470-c35dc00e7a9e 00:32:18.705 10:23:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:32:18.705 10:23:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:32:18.705 10:23:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=971ea55e-785b-4372-b470-c35dc00e7a9e 00:32:18.705 10:23:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:32:18.705 10:23:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 971ea55e-785b-4372-b470-c35dc00e7a9e 00:32:18.705 10:23:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=971ea55e-785b-4372-b470-c35dc00e7a9e 00:32:18.705 10:23:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:18.705 10:23:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:32:18.705 10:23:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:32:18.705 10:23:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 971ea55e-785b-4372-b470-c35dc00e7a9e 00:32:18.964 10:23:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:18.964 { 00:32:18.964 "name": "971ea55e-785b-4372-b470-c35dc00e7a9e", 00:32:18.964 "aliases": [ 00:32:18.964 "lvs/nvme0n1p0" 00:32:18.964 ], 00:32:18.964 "product_name": "Logical Volume", 00:32:18.964 "block_size": 4096, 00:32:18.964 "num_blocks": 26476544, 00:32:18.964 "uuid": "971ea55e-785b-4372-b470-c35dc00e7a9e", 00:32:18.964 "assigned_rate_limits": { 00:32:18.964 "rw_ios_per_sec": 0, 00:32:18.964 "rw_mbytes_per_sec": 0, 00:32:18.964 "r_mbytes_per_sec": 0, 00:32:18.964 "w_mbytes_per_sec": 0 00:32:18.964 }, 00:32:18.964 "claimed": false, 00:32:18.964 "zoned": false, 00:32:18.964 "supported_io_types": { 00:32:18.965 "read": true, 00:32:18.965 "write": true, 00:32:18.965 "unmap": true, 00:32:18.965 "flush": false, 00:32:18.965 "reset": true, 00:32:18.965 "nvme_admin": false, 00:32:18.965 "nvme_io": false, 00:32:18.965 "nvme_io_md": false, 00:32:18.965 "write_zeroes": true, 00:32:18.965 "zcopy": false, 00:32:18.965 "get_zone_info": false, 00:32:18.965 "zone_management": false, 00:32:18.965 "zone_append": false, 00:32:18.965 "compare": false, 00:32:18.965 "compare_and_write": false, 00:32:18.965 "abort": false, 00:32:18.965 "seek_hole": true, 00:32:18.965 "seek_data": true, 00:32:18.965 "copy": false, 00:32:18.965 "nvme_iov_md": false 00:32:18.965 }, 00:32:18.965 "driver_specific": { 00:32:18.965 "lvol": { 00:32:18.965 "lvol_store_uuid": "64ea8f8d-1847-46f8-8bf6-227b4f515734", 00:32:18.965 "base_bdev": "nvme0n1", 00:32:18.965 "thin_provision": true, 00:32:18.965 "num_allocated_clusters": 0, 00:32:18.965 "snapshot": false, 00:32:18.965 "clone": false, 00:32:18.965 "esnap_clone": false 00:32:18.965 } 00:32:18.965 } 00:32:18.965 } 00:32:18.965 ]' 00:32:18.965 10:23:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:18.965 10:23:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:32:18.965 10:23:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:18.965 10:23:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:32:18.965 10:23:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:32:18.965 10:23:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:32:18.965 10:23:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:32:18.965 10:23:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:32:18.965 10:23:49 ftl.ftl_dirty_shutdown -- 
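Condensed from the xtrace above, the base-device side of the stack is assembled with four RPCs; the UUIDs are per-run values returned by the RPCs, not constants:

scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
scripts/rpc.py bdev_lvol_delete_lvstore -u 6279a32a-519c-42f5-9eb2-82118898d67f    # clear a stale store from an earlier run
scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 64ea8f8d-1847-46f8-8bf6-227b4f515734    # 103424 MiB thin-provisioned base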
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:32:19.223 10:23:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:32:19.223 10:23:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:32:19.223 10:23:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 971ea55e-785b-4372-b470-c35dc00e7a9e 00:32:19.223 10:23:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=971ea55e-785b-4372-b470-c35dc00e7a9e 00:32:19.223 10:23:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:19.223 10:23:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:32:19.223 10:23:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:32:19.223 10:23:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 971ea55e-785b-4372-b470-c35dc00e7a9e 00:32:19.483 10:23:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:19.483 { 00:32:19.483 "name": "971ea55e-785b-4372-b470-c35dc00e7a9e", 00:32:19.483 "aliases": [ 00:32:19.483 "lvs/nvme0n1p0" 00:32:19.483 ], 00:32:19.483 "product_name": "Logical Volume", 00:32:19.483 "block_size": 4096, 00:32:19.483 "num_blocks": 26476544, 00:32:19.483 "uuid": "971ea55e-785b-4372-b470-c35dc00e7a9e", 00:32:19.483 "assigned_rate_limits": { 00:32:19.483 "rw_ios_per_sec": 0, 00:32:19.483 "rw_mbytes_per_sec": 0, 00:32:19.483 "r_mbytes_per_sec": 0, 00:32:19.483 "w_mbytes_per_sec": 0 00:32:19.483 }, 00:32:19.483 "claimed": false, 00:32:19.483 "zoned": false, 00:32:19.483 "supported_io_types": { 00:32:19.483 "read": true, 00:32:19.483 "write": true, 00:32:19.483 "unmap": true, 00:32:19.483 "flush": false, 00:32:19.483 "reset": true, 00:32:19.483 "nvme_admin": false, 00:32:19.483 "nvme_io": false, 00:32:19.483 "nvme_io_md": false, 00:32:19.483 "write_zeroes": true, 00:32:19.483 "zcopy": false, 00:32:19.483 "get_zone_info": false, 00:32:19.483 "zone_management": false, 00:32:19.483 "zone_append": false, 00:32:19.483 "compare": false, 00:32:19.483 "compare_and_write": false, 00:32:19.483 "abort": false, 00:32:19.483 "seek_hole": true, 00:32:19.483 "seek_data": true, 00:32:19.483 "copy": false, 00:32:19.483 "nvme_iov_md": false 00:32:19.483 }, 00:32:19.483 "driver_specific": { 00:32:19.483 "lvol": { 00:32:19.483 "lvol_store_uuid": "64ea8f8d-1847-46f8-8bf6-227b4f515734", 00:32:19.483 "base_bdev": "nvme0n1", 00:32:19.483 "thin_provision": true, 00:32:19.483 "num_allocated_clusters": 0, 00:32:19.483 "snapshot": false, 00:32:19.483 "clone": false, 00:32:19.483 "esnap_clone": false 00:32:19.483 } 00:32:19.483 } 00:32:19.483 } 00:32:19.483 ]' 00:32:19.483 10:23:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:19.483 10:23:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:32:19.483 10:23:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:19.483 10:23:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:32:19.483 10:23:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:32:19.483 10:23:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:32:19.483 10:23:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:32:19.483 10:23:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:32:19.742 10:23:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:32:19.742 10:23:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 971ea55e-785b-4372-b470-c35dc00e7a9e 00:32:19.742 10:23:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=971ea55e-785b-4372-b470-c35dc00e7a9e 00:32:19.742 10:23:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:19.742 10:23:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:32:19.742 10:23:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:32:19.742 10:23:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 971ea55e-785b-4372-b470-c35dc00e7a9e 00:32:20.002 10:23:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:20.002 { 00:32:20.002 "name": "971ea55e-785b-4372-b470-c35dc00e7a9e", 00:32:20.002 "aliases": [ 00:32:20.002 "lvs/nvme0n1p0" 00:32:20.002 ], 00:32:20.002 "product_name": "Logical Volume", 00:32:20.002 "block_size": 4096, 00:32:20.002 "num_blocks": 26476544, 00:32:20.002 "uuid": "971ea55e-785b-4372-b470-c35dc00e7a9e", 00:32:20.002 "assigned_rate_limits": { 00:32:20.002 "rw_ios_per_sec": 0, 00:32:20.002 "rw_mbytes_per_sec": 0, 00:32:20.002 "r_mbytes_per_sec": 0, 00:32:20.002 "w_mbytes_per_sec": 0 00:32:20.002 }, 00:32:20.002 "claimed": false, 00:32:20.002 "zoned": false, 00:32:20.002 "supported_io_types": { 00:32:20.002 "read": true, 00:32:20.002 "write": true, 00:32:20.002 "unmap": true, 00:32:20.002 "flush": false, 00:32:20.002 "reset": true, 00:32:20.002 "nvme_admin": false, 00:32:20.002 "nvme_io": false, 00:32:20.002 "nvme_io_md": false, 00:32:20.002 "write_zeroes": true, 00:32:20.002 "zcopy": false, 00:32:20.002 "get_zone_info": false, 00:32:20.002 "zone_management": false, 00:32:20.002 "zone_append": false, 00:32:20.002 "compare": false, 00:32:20.002 "compare_and_write": false, 00:32:20.002 "abort": false, 00:32:20.002 "seek_hole": true, 00:32:20.002 "seek_data": true, 00:32:20.002 "copy": false, 00:32:20.002 "nvme_iov_md": false 00:32:20.002 }, 00:32:20.002 "driver_specific": { 00:32:20.002 "lvol": { 00:32:20.002 "lvol_store_uuid": "64ea8f8d-1847-46f8-8bf6-227b4f515734", 00:32:20.002 "base_bdev": "nvme0n1", 00:32:20.002 "thin_provision": true, 00:32:20.002 "num_allocated_clusters": 0, 00:32:20.002 "snapshot": false, 00:32:20.002 "clone": false, 00:32:20.002 "esnap_clone": false 00:32:20.002 } 00:32:20.002 } 00:32:20.002 } 00:32:20.002 ]' 00:32:20.002 10:23:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:20.263 10:23:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:32:20.263 10:23:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:20.263 10:23:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:32:20.263 10:23:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:32:20.263 10:23:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:32:20.263 10:23:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:32:20.263 10:23:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 971ea55e-785b-4372-b470-c35dc00e7a9e 
--l2p_dram_limit 10' 00:32:20.263 10:23:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:32:20.263 10:23:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:32:20.263 10:23:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:32:20.263 10:23:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 971ea55e-785b-4372-b470-c35dc00e7a9e --l2p_dram_limit 10 -c nvc0n1p0 00:32:20.522 [2024-12-09 10:23:51.111158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.522 [2024-12-09 10:23:51.111393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:20.522 [2024-12-09 10:23:51.111437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:32:20.522 [2024-12-09 10:23:51.111453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.522 [2024-12-09 10:23:51.111577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.522 [2024-12-09 10:23:51.111600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:20.522 [2024-12-09 10:23:51.111619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:32:20.522 [2024-12-09 10:23:51.111633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.522 [2024-12-09 10:23:51.111681] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:20.522 [2024-12-09 10:23:51.112659] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:20.522 [2024-12-09 10:23:51.112701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.522 [2024-12-09 10:23:51.112732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:20.522 [2024-12-09 10:23:51.112750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.035 ms 00:32:20.522 [2024-12-09 10:23:51.112763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.522 [2024-12-09 10:23:51.112950] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 36a3d27b-2eba-4fbc-84d4-d4a6a9f46810 00:32:20.522 [2024-12-09 10:23:51.115390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.522 [2024-12-09 10:23:51.115453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:32:20.522 [2024-12-09 10:23:51.115473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:32:20.522 [2024-12-09 10:23:51.115489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.522 [2024-12-09 10:23:51.128530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.522 [2024-12-09 10:23:51.128591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:20.522 [2024-12-09 10:23:51.128611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.958 ms 00:32:20.523 [2024-12-09 10:23:51.128625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.523 [2024-12-09 10:23:51.128753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.523 [2024-12-09 10:23:51.128779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:20.523 [2024-12-09 10:23:51.128794] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:32:20.523 [2024-12-09 10:23:51.128813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.523 [2024-12-09 10:23:51.128922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.523 [2024-12-09 10:23:51.128950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:20.523 [2024-12-09 10:23:51.128969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:32:20.523 [2024-12-09 10:23:51.128983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.523 [2024-12-09 10:23:51.129021] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:20.523 [2024-12-09 10:23:51.134117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.523 [2024-12-09 10:23:51.134158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:20.523 [2024-12-09 10:23:51.134182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.103 ms 00:32:20.523 [2024-12-09 10:23:51.134194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.523 [2024-12-09 10:23:51.134245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.523 [2024-12-09 10:23:51.134264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:20.523 [2024-12-09 10:23:51.134281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:32:20.523 [2024-12-09 10:23:51.134293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.523 [2024-12-09 10:23:51.134343] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:32:20.523 [2024-12-09 10:23:51.134491] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:20.523 [2024-12-09 10:23:51.134516] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:20.523 [2024-12-09 10:23:51.134532] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:20.523 [2024-12-09 10:23:51.134550] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:20.523 [2024-12-09 10:23:51.134592] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:20.523 [2024-12-09 10:23:51.134613] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:20.523 [2024-12-09 10:23:51.134627] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:20.523 [2024-12-09 10:23:51.134648] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:20.523 [2024-12-09 10:23:51.134661] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:20.523 [2024-12-09 10:23:51.134678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.523 [2024-12-09 10:23:51.134704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:20.523 [2024-12-09 10:23:51.134721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:32:20.523 [2024-12-09 10:23:51.134734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.523 [2024-12-09 10:23:51.134826] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.523 [2024-12-09 10:23:51.134869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:20.523 [2024-12-09 10:23:51.134909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:32:20.523 [2024-12-09 10:23:51.134921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.523 [2024-12-09 10:23:51.135057] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:20.523 [2024-12-09 10:23:51.135080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:20.523 [2024-12-09 10:23:51.135096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:20.523 [2024-12-09 10:23:51.135109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:20.523 [2024-12-09 10:23:51.135126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:20.523 [2024-12-09 10:23:51.135137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:20.523 [2024-12-09 10:23:51.135151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:20.523 [2024-12-09 10:23:51.135162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:20.523 [2024-12-09 10:23:51.135176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:20.523 [2024-12-09 10:23:51.135187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:20.523 [2024-12-09 10:23:51.135203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:20.523 [2024-12-09 10:23:51.135215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:20.523 [2024-12-09 10:23:51.135244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:20.523 [2024-12-09 10:23:51.135272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:20.523 [2024-12-09 10:23:51.135287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:20.523 [2024-12-09 10:23:51.135299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:20.523 [2024-12-09 10:23:51.135316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:20.523 [2024-12-09 10:23:51.135328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:20.523 [2024-12-09 10:23:51.135344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:20.523 [2024-12-09 10:23:51.135357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:20.523 [2024-12-09 10:23:51.135372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:20.523 [2024-12-09 10:23:51.135385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:20.523 [2024-12-09 10:23:51.135400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:20.523 [2024-12-09 10:23:51.135412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:20.523 [2024-12-09 10:23:51.135426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:20.523 [2024-12-09 10:23:51.135438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:20.523 [2024-12-09 10:23:51.135453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:20.523 [2024-12-09 10:23:51.135465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:20.523 [2024-12-09 10:23:51.135479] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:20.523 [2024-12-09 10:23:51.135492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:20.523 [2024-12-09 10:23:51.135506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:20.523 [2024-12-09 10:23:51.135517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:20.523 [2024-12-09 10:23:51.135535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:20.523 [2024-12-09 10:23:51.135548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:20.523 [2024-12-09 10:23:51.135563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:20.523 [2024-12-09 10:23:51.135574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:20.523 [2024-12-09 10:23:51.135591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:20.523 [2024-12-09 10:23:51.135603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:20.523 [2024-12-09 10:23:51.135618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:20.523 [2024-12-09 10:23:51.135630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:20.523 [2024-12-09 10:23:51.135646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:20.523 [2024-12-09 10:23:51.135659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:20.523 [2024-12-09 10:23:51.135689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:20.523 [2024-12-09 10:23:51.135700] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:20.523 [2024-12-09 10:23:51.135728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:20.523 [2024-12-09 10:23:51.135757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:20.523 [2024-12-09 10:23:51.135773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:20.523 [2024-12-09 10:23:51.135787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:20.524 [2024-12-09 10:23:51.135805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:20.524 [2024-12-09 10:23:51.135817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:20.524 [2024-12-09 10:23:51.135833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:20.524 [2024-12-09 10:23:51.135845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:20.524 [2024-12-09 10:23:51.135860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:20.524 [2024-12-09 10:23:51.135874] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:20.524 [2024-12-09 10:23:51.135897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:20.524 [2024-12-09 10:23:51.135911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:20.524 [2024-12-09 10:23:51.135946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:20.524 [2024-12-09 10:23:51.135960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:20.524 [2024-12-09 10:23:51.135975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:20.524 [2024-12-09 10:23:51.135987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:20.524 [2024-12-09 10:23:51.136004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:20.524 [2024-12-09 10:23:51.136017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:20.524 [2024-12-09 10:23:51.136032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:20.524 [2024-12-09 10:23:51.136045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:20.524 [2024-12-09 10:23:51.136063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:20.524 [2024-12-09 10:23:51.136076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:20.524 [2024-12-09 10:23:51.136091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:20.524 [2024-12-09 10:23:51.136103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:20.524 [2024-12-09 10:23:51.136132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:20.524 [2024-12-09 10:23:51.136162] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:20.524 [2024-12-09 10:23:51.136179] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:20.524 [2024-12-09 10:23:51.136193] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:20.524 [2024-12-09 10:23:51.136209] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:20.524 [2024-12-09 10:23:51.136223] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:20.524 [2024-12-09 10:23:51.136238] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:20.524 [2024-12-09 10:23:51.136252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:20.524 [2024-12-09 10:23:51.136269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:20.524 [2024-12-09 10:23:51.136283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.258 ms 00:32:20.524 [2024-12-09 10:23:51.136298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:20.524 [2024-12-09 10:23:51.136360] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:32:20.524 [2024-12-09 10:23:51.136388] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:32:23.814 [2024-12-09 10:23:54.309808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.814 [2024-12-09 10:23:54.309903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:32:23.814 [2024-12-09 10:23:54.309928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3173.460 ms 00:32:23.814 [2024-12-09 10:23:54.309945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.814 [2024-12-09 10:23:54.349184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.814 [2024-12-09 10:23:54.349602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:23.814 [2024-12-09 10:23:54.349638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.789 ms 00:32:23.814 [2024-12-09 10:23:54.349665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.814 [2024-12-09 10:23:54.349939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.814 [2024-12-09 10:23:54.349970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:23.814 [2024-12-09 10:23:54.349987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:32:23.814 [2024-12-09 10:23:54.350013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.814 [2024-12-09 10:23:54.395202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.814 [2024-12-09 10:23:54.395574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:23.814 [2024-12-09 10:23:54.395609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.121 ms 00:32:23.814 [2024-12-09 10:23:54.395632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.814 [2024-12-09 10:23:54.395711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.814 [2024-12-09 10:23:54.395741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:23.814 [2024-12-09 10:23:54.395757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:23.814 [2024-12-09 10:23:54.395790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.814 [2024-12-09 10:23:54.396566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.814 [2024-12-09 10:23:54.396601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:23.814 [2024-12-09 10:23:54.396619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.585 ms 00:32:23.814 [2024-12-09 10:23:54.396635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.814 [2024-12-09 10:23:54.396813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.814 [2024-12-09 10:23:54.396862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:23.814 [2024-12-09 10:23:54.396885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:32:23.814 [2024-12-09 10:23:54.396904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.814 [2024-12-09 10:23:54.419277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.814 [2024-12-09 10:23:54.419356] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:23.814 [2024-12-09 10:23:54.419386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.337 ms 00:32:23.814 [2024-12-09 10:23:54.419403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.814 [2024-12-09 10:23:54.441410] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:23.814 [2024-12-09 10:23:54.446055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.814 [2024-12-09 10:23:54.446104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:23.814 [2024-12-09 10:23:54.446132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.452 ms 00:32:23.814 [2024-12-09 10:23:54.446146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.814 [2024-12-09 10:23:54.528696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.814 [2024-12-09 10:23:54.528785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:32:23.814 [2024-12-09 10:23:54.528813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.461 ms 00:32:23.814 [2024-12-09 10:23:54.528866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.814 [2024-12-09 10:23:54.529150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.814 [2024-12-09 10:23:54.529178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:23.814 [2024-12-09 10:23:54.529201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:32:23.814 [2024-12-09 10:23:54.529214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.814 [2024-12-09 10:23:54.557047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.814 [2024-12-09 10:23:54.557124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:32:23.814 [2024-12-09 10:23:54.557152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.716 ms 00:32:23.814 [2024-12-09 10:23:54.557166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.814 [2024-12-09 10:23:54.583629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.815 [2024-12-09 10:23:54.583692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:32:23.815 [2024-12-09 10:23:54.583720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.380 ms 00:32:23.815 [2024-12-09 10:23:54.583734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:23.815 [2024-12-09 10:23:54.584568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:23.815 [2024-12-09 10:23:54.584606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:23.815 [2024-12-09 10:23:54.584628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.774 ms 00:32:23.815 [2024-12-09 10:23:54.584645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.074 [2024-12-09 10:23:54.670454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.074 [2024-12-09 10:23:54.670542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:32:24.074 [2024-12-09 10:23:54.670599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.725 ms 00:32:24.074 [2024-12-09 10:23:54.670617] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.074 [2024-12-09 10:23:54.701627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.074 [2024-12-09 10:23:54.702032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:32:24.074 [2024-12-09 10:23:54.702076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.836 ms 00:32:24.074 [2024-12-09 10:23:54.702093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.074 [2024-12-09 10:23:54.732117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.074 [2024-12-09 10:23:54.732196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:32:24.074 [2024-12-09 10:23:54.732223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.933 ms 00:32:24.074 [2024-12-09 10:23:54.732236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.074 [2024-12-09 10:23:54.758925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.074 [2024-12-09 10:23:54.759015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:24.074 [2024-12-09 10:23:54.759044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.610 ms 00:32:24.074 [2024-12-09 10:23:54.759058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.074 [2024-12-09 10:23:54.759125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.074 [2024-12-09 10:23:54.759147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:24.074 [2024-12-09 10:23:54.759186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:32:24.074 [2024-12-09 10:23:54.759199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.074 [2024-12-09 10:23:54.759321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.074 [2024-12-09 10:23:54.759347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:24.074 [2024-12-09 10:23:54.759365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:32:24.074 [2024-12-09 10:23:54.759379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.074 [2024-12-09 10:23:54.761102] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3649.327 ms, result 0 00:32:24.074 { 00:32:24.074 "name": "ftl0", 00:32:24.074 "uuid": "36a3d27b-2eba-4fbc-84d4-d4a6a9f46810" 00:32:24.075 } 00:32:24.075 10:23:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:32:24.075 10:23:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:32:24.333 10:23:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:32:24.333 10:23:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:32:24.333 10:23:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:32:24.900 /dev/nbd0 00:32:24.900 10:23:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:32:24.900 10:23:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:32:24.900 10:23:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:32:24.900 10:23:55 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:32:24.900 10:23:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:32:24.900 10:23:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:32:24.900 10:23:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:32:24.900 10:23:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:32:24.900 10:23:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:32:24.900 10:23:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:32:24.900 1+0 records in 00:32:24.900 1+0 records out 00:32:24.900 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311823 s, 13.1 MB/s 00:32:24.900 10:23:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:32:24.900 10:23:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:32:24.900 10:23:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:32:24.900 10:23:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:32:24.900 10:23:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:32:24.900 10:23:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:32:24.900 [2024-12-09 10:23:55.549292] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:32:24.900 [2024-12-09 10:23:55.549500] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81999 ] 00:32:25.159 [2024-12-09 10:23:55.740534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:25.159 [2024-12-09 10:23:55.896018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:26.535  [2024-12-09T10:23:58.269Z] Copying: 166/1024 [MB] (166 MBps) [2024-12-09T10:23:59.649Z] Copying: 347/1024 [MB] (181 MBps) [2024-12-09T10:24:00.587Z] Copying: 520/1024 [MB] (173 MBps) [2024-12-09T10:24:01.524Z] Copying: 711/1024 [MB] (190 MBps) [2024-12-09T10:24:02.092Z] Copying: 911/1024 [MB] (199 MBps) [2024-12-09T10:24:03.026Z] Copying: 1024/1024 [MB] (average 183 MBps) 00:32:32.229 00:32:32.229 10:24:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:32:34.132 10:24:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:32:34.132 [2024-12-09 10:24:04.758439] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
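For reference, the write-back step that the log records next can be reproduced by hand against a running SPDK app in which ftl0 has already been created; a minimal sketch using only paths and arguments that appear in this log (the testfile contents are whatever the earlier /dev/urandom pass produced):

    # expose the FTL bdev over NBD, as the test did above
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0
    # checksum the source file so it can be compared after the dirty shutdown
    md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
    # write the 1 GiB test file (262144 x 4096-byte blocks) through the NBD device with direct I/O
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct

The log of that spdk_dd run follows.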
00:32:34.132 [2024-12-09 10:24:04.758968] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82092 ] 00:32:34.391 [2024-12-09 10:24:04.930749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:34.391 [2024-12-09 10:24:05.089155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:35.765  [2024-12-09T10:24:07.495Z] Copying: 13/1024 [MB] (13 MBps) [2024-12-09T10:24:08.465Z] Copying: 27/1024 [MB] (14 MBps) [2024-12-09T10:24:09.401Z] Copying: 43/1024 [MB] (15 MBps) [2024-12-09T10:24:10.775Z] Copying: 59/1024 [MB] (15 MBps) [2024-12-09T10:24:11.711Z] Copying: 74/1024 [MB] (15 MBps) [2024-12-09T10:24:12.648Z] Copying: 90/1024 [MB] (15 MBps) [2024-12-09T10:24:13.584Z] Copying: 106/1024 [MB] (16 MBps) [2024-12-09T10:24:14.519Z] Copying: 120/1024 [MB] (13 MBps) [2024-12-09T10:24:15.452Z] Copying: 135/1024 [MB] (14 MBps) [2024-12-09T10:24:16.386Z] Copying: 149/1024 [MB] (13 MBps) [2024-12-09T10:24:17.762Z] Copying: 163/1024 [MB] (13 MBps) [2024-12-09T10:24:18.698Z] Copying: 177/1024 [MB] (14 MBps) [2024-12-09T10:24:19.634Z] Copying: 192/1024 [MB] (14 MBps) [2024-12-09T10:24:20.569Z] Copying: 206/1024 [MB] (14 MBps) [2024-12-09T10:24:21.505Z] Copying: 220/1024 [MB] (14 MBps) [2024-12-09T10:24:22.441Z] Copying: 234/1024 [MB] (13 MBps) [2024-12-09T10:24:23.816Z] Copying: 248/1024 [MB] (14 MBps) [2024-12-09T10:24:24.750Z] Copying: 262/1024 [MB] (14 MBps) [2024-12-09T10:24:25.686Z] Copying: 276/1024 [MB] (13 MBps) [2024-12-09T10:24:26.620Z] Copying: 290/1024 [MB] (13 MBps) [2024-12-09T10:24:27.554Z] Copying: 304/1024 [MB] (13 MBps) [2024-12-09T10:24:28.488Z] Copying: 318/1024 [MB] (14 MBps) [2024-12-09T10:24:29.423Z] Copying: 332/1024 [MB] (14 MBps) [2024-12-09T10:24:30.799Z] Copying: 346/1024 [MB] (14 MBps) [2024-12-09T10:24:31.733Z] Copying: 360/1024 [MB] (14 MBps) [2024-12-09T10:24:32.666Z] Copying: 374/1024 [MB] (14 MBps) [2024-12-09T10:24:33.600Z] Copying: 389/1024 [MB] (14 MBps) [2024-12-09T10:24:34.562Z] Copying: 402/1024 [MB] (13 MBps) [2024-12-09T10:24:35.497Z] Copying: 417/1024 [MB] (14 MBps) [2024-12-09T10:24:36.432Z] Copying: 431/1024 [MB] (13 MBps) [2024-12-09T10:24:37.808Z] Copying: 445/1024 [MB] (14 MBps) [2024-12-09T10:24:38.745Z] Copying: 459/1024 [MB] (14 MBps) [2024-12-09T10:24:39.681Z] Copying: 473/1024 [MB] (14 MBps) [2024-12-09T10:24:40.618Z] Copying: 488/1024 [MB] (14 MBps) [2024-12-09T10:24:41.554Z] Copying: 502/1024 [MB] (13 MBps) [2024-12-09T10:24:42.491Z] Copying: 516/1024 [MB] (14 MBps) [2024-12-09T10:24:43.426Z] Copying: 531/1024 [MB] (14 MBps) [2024-12-09T10:24:44.803Z] Copying: 545/1024 [MB] (14 MBps) [2024-12-09T10:24:45.737Z] Copying: 560/1024 [MB] (14 MBps) [2024-12-09T10:24:46.671Z] Copying: 575/1024 [MB] (14 MBps) [2024-12-09T10:24:47.607Z] Copying: 589/1024 [MB] (14 MBps) [2024-12-09T10:24:48.543Z] Copying: 603/1024 [MB] (14 MBps) [2024-12-09T10:24:49.486Z] Copying: 617/1024 [MB] (14 MBps) [2024-12-09T10:24:50.420Z] Copying: 632/1024 [MB] (14 MBps) [2024-12-09T10:24:51.798Z] Copying: 646/1024 [MB] (14 MBps) [2024-12-09T10:24:52.734Z] Copying: 660/1024 [MB] (14 MBps) [2024-12-09T10:24:53.670Z] Copying: 675/1024 [MB] (14 MBps) [2024-12-09T10:24:54.608Z] Copying: 690/1024 [MB] (14 MBps) [2024-12-09T10:24:55.546Z] Copying: 704/1024 [MB] (14 MBps) [2024-12-09T10:24:56.485Z] Copying: 719/1024 [MB] (14 MBps) 
[2024-12-09T10:24:57.423Z] Copying: 733/1024 [MB] (14 MBps) [2024-12-09T10:24:58.802Z] Copying: 748/1024 [MB] (14 MBps) [2024-12-09T10:24:59.739Z] Copying: 762/1024 [MB] (14 MBps) [2024-12-09T10:25:00.676Z] Copying: 777/1024 [MB] (14 MBps) [2024-12-09T10:25:01.617Z] Copying: 791/1024 [MB] (14 MBps) [2024-12-09T10:25:02.554Z] Copying: 806/1024 [MB] (14 MBps) [2024-12-09T10:25:03.491Z] Copying: 820/1024 [MB] (14 MBps) [2024-12-09T10:25:04.429Z] Copying: 835/1024 [MB] (14 MBps) [2024-12-09T10:25:05.805Z] Copying: 849/1024 [MB] (14 MBps) [2024-12-09T10:25:06.745Z] Copying: 863/1024 [MB] (14 MBps) [2024-12-09T10:25:07.684Z] Copying: 877/1024 [MB] (14 MBps) [2024-12-09T10:25:08.621Z] Copying: 892/1024 [MB] (14 MBps) [2024-12-09T10:25:09.559Z] Copying: 906/1024 [MB] (14 MBps) [2024-12-09T10:25:10.498Z] Copying: 921/1024 [MB] (14 MBps) [2024-12-09T10:25:11.436Z] Copying: 935/1024 [MB] (14 MBps) [2024-12-09T10:25:12.814Z] Copying: 950/1024 [MB] (14 MBps) [2024-12-09T10:25:13.750Z] Copying: 965/1024 [MB] (14 MBps) [2024-12-09T10:25:14.686Z] Copying: 979/1024 [MB] (14 MBps) [2024-12-09T10:25:15.630Z] Copying: 994/1024 [MB] (14 MBps) [2024-12-09T10:25:16.566Z] Copying: 1009/1024 [MB] (14 MBps) [2024-12-09T10:25:16.566Z] Copying: 1023/1024 [MB] (14 MBps) [2024-12-09T10:25:17.557Z] Copying: 1024/1024 [MB] (average 14 MBps) 00:33:46.760 00:33:46.760 10:25:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:33:46.760 10:25:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:33:47.019 10:25:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:33:47.278 [2024-12-09 10:25:17.932729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.278 [2024-12-09 10:25:17.932788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:47.278 [2024-12-09 10:25:17.932809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:33:47.278 [2024-12-09 10:25:17.932823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.278 [2024-12-09 10:25:17.932902] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:47.278 [2024-12-09 10:25:17.936316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.278 [2024-12-09 10:25:17.936347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:47.278 [2024-12-09 10:25:17.936364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.383 ms 00:33:47.278 [2024-12-09 10:25:17.936380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.278 [2024-12-09 10:25:17.938582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.278 [2024-12-09 10:25:17.938639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:47.278 [2024-12-09 10:25:17.938660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.154 ms 00:33:47.278 [2024-12-09 10:25:17.938672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.278 [2024-12-09 10:25:17.955889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.278 [2024-12-09 10:25:17.955928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:33:47.278 [2024-12-09 10:25:17.955965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 17.184 ms 00:33:47.278 [2024-12-09 10:25:17.955976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.278 [2024-12-09 10:25:17.961390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.278 [2024-12-09 10:25:17.961421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:33:47.278 [2024-12-09 10:25:17.961454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.370 ms 00:33:47.278 [2024-12-09 10:25:17.961464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.278 [2024-12-09 10:25:17.987386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.278 [2024-12-09 10:25:17.987423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:33:47.278 [2024-12-09 10:25:17.987441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.831 ms 00:33:47.278 [2024-12-09 10:25:17.987452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.278 [2024-12-09 10:25:18.003740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.278 [2024-12-09 10:25:18.003777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:33:47.278 [2024-12-09 10:25:18.003799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.240 ms 00:33:47.278 [2024-12-09 10:25:18.003809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.278 [2024-12-09 10:25:18.004017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.278 [2024-12-09 10:25:18.004039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:33:47.278 [2024-12-09 10:25:18.004055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:33:47.278 [2024-12-09 10:25:18.004066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.278 [2024-12-09 10:25:18.029214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.278 [2024-12-09 10:25:18.029250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:33:47.278 [2024-12-09 10:25:18.029267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.122 ms 00:33:47.278 [2024-12-09 10:25:18.029277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.278 [2024-12-09 10:25:18.054050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.278 [2024-12-09 10:25:18.054226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:33:47.278 [2024-12-09 10:25:18.054258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.729 ms 00:33:47.278 [2024-12-09 10:25:18.054270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.539 [2024-12-09 10:25:18.079641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.539 [2024-12-09 10:25:18.079694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:33:47.539 [2024-12-09 10:25:18.079728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.318 ms 00:33:47.539 [2024-12-09 10:25:18.079739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.539 [2024-12-09 10:25:18.104249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.539 [2024-12-09 10:25:18.104284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:33:47.539 [2024-12-09 
10:25:18.104307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.349 ms 00:33:47.539 [2024-12-09 10:25:18.104316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.539 [2024-12-09 10:25:18.104359] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:47.539 [2024-12-09 10:25:18.104385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:33:47.539 [2024-12-09 10:25:18.104406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:47.539 [2024-12-09 10:25:18.104417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:47.539 [2024-12-09 10:25:18.104429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:47.539 [2024-12-09 10:25:18.104439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:47.539 [2024-12-09 10:25:18.104452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:47.539 [2024-12-09 10:25:18.104462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:47.539 [2024-12-09 10:25:18.104477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:47.539 [2024-12-09 10:25:18.104487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:47.539 [2024-12-09 10:25:18.104500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:47.539 [2024-12-09 10:25:18.104510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:47.539 [2024-12-09 10:25:18.104523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:47.539 [2024-12-09 10:25:18.104533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:47.539 [2024-12-09 10:25:18.104545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:47.539 [2024-12-09 10:25:18.104555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:47.539 [2024-12-09 10:25:18.104567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:47.539 [2024-12-09 10:25:18.104577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:47.539 [2024-12-09 10:25:18.104590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:47.539 [2024-12-09 10:25:18.104600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:47.539 [2024-12-09 10:25:18.104614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:47.539 [2024-12-09 10:25:18.104625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:47.539 [2024-12-09 10:25:18.104637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:47.540 [2024-12-09 10:25:18.104647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
00:33:47.540 [2024-12-09 10:25:18.104662 .. 10:25:18.105690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23 .. Band 100: 0 / 261120 wr_cnt: 0 state: free
00:33:47.541 [2024-12-09 10:25:18.105708] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:33:47.541 [2024-12-09 10:25:18.105720] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 36a3d27b-2eba-4fbc-84d4-d4a6a9f46810
00:33:47.541 [2024-12-09 10:25:18.105731] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:33:47.541 [2024-12-09 10:25:18.105744] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:33:47.541 [2024-12-09 10:25:18.105756] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:33:47.541 [2024-12-09 10:25:18.105768] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:33:47.541 [2024-12-09 10:25:18.105777] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:33:47.541 [2024-12-09 10:25:18.105789] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  crit: 0
00:33:47.541 [2024-12-09 10:25:18.105798] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  high: 0
00:33:47.541 [2024-12-09 10:25:18.105809] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  low: 0
00:33:47.541 [2024-12-09 10:25:18.105817] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:33:47.541 [2024-12-09 10:25:18.105829] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Dump statistics, duration: 1.474 ms, status: 0
00:33:47.541 [2024-12-09 10:25:18.121468] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize L2P, duration: 15.550 ms, status: 0
00:33:47.541 [2024-12-09 10:25:18.122078] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize P2L checkpointing, duration: 0.500 ms, status: 0
00:33:47.541 [2024-12-09 10:25:18.169584] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize reloc, duration: 0.000 ms, status: 0
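The band dump above pins the device geometry: every band holds 261120 FTL blocks. A minimal sketch of the implied capacities follows, assuming the 4 KiB block size suggested by the spdk_dd --bs=4096 runs below; the log never states the FTL block size directly, so treat that constant as an assumption, not SPDK's definition.

    /* back-of-the-envelope band sizing; 4096 B per block is an assumption */
    #include <stdio.h>

    int main(void) {
        long long blocks_per_band = 261120;   /* from "Band N: 0 / 261120" above */
        double band_mib = blocks_per_band * 4096.0 / (1024 * 1024);
        printf("one band: %.0f MiB, 100 bands: %.0f MiB\n", band_mib, 100 * band_mib);
        return 0;   /* prints 1020 MiB per band, 102000 MiB across the 100 dumped bands */
    }

That total is in line with the 103424.00 MiB base device capacity reported during the startup sequence further down.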
00:33:47.541 [2024-12-09 10:25:18.169727] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands metadata, duration: 0.000 ms, status: 0
00:33:47.541 [2024-12-09 10:25:18.169917] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize trim map, duration: 0.000 ms, status: 0
00:33:47.541 [2024-12-09 10:25:18.170028] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize valid map, duration: 0.000 ms, status: 0
00:33:47.541 [2024-12-09 10:25:18.255616] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize NV cache, duration: 0.000 ms, status: 0
00:33:47.541 [2024-12-09 10:25:18.325061] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize metadata, duration: 0.000 ms, status: 0
00:33:47.541 [2024-12-09 10:25:18.325283] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize core IO channel, duration: 0.000 ms, status: 0
00:33:47.541 [2024-12-09 10:25:18.325431] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands, duration: 0.000 ms, status: 0
00:33:47.541 [2024-12-09 10:25:18.325594] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize memory pools, duration: 0.000 ms, status: 0
00:33:47.541 [2024-12-09 10:25:18.325689] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize superblock, duration: 0.000 ms, status: 0
00:33:47.541 [2024-12-09 10:25:18.325779] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open cache bdev, duration: 0.000 ms, status: 0
00:33:47.541 [2024-12-09 10:25:18.325935] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open base bdev, duration: 0.000 ms, status: 0
00:33:47.541 [2024-12-09 10:25:18.326200] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 393.383 ms, result 0
00:33:47.541 true
00:33:47.800 10:25:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 81851
00:33:47.800 10:25:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid81851
00:33:47.800 10:25:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144
00:33:47.800 [2024-12-09 10:25:18.468701] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization...
00:33:47.800 [2024-12-09 10:25:18.469189] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82808 ]
00:33:48.058 [2024-12-09 10:25:18.651527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:48.058 [2024-12-09 10:25:18.757101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:49.434 [2024-12-09T10:25:21.167Z] Copying: 208/1024 [MB] (208 MBps)
[2024-12-09T10:25:22.103Z] Copying: 413/1024 [MB] (204 MBps)
[2024-12-09T10:25:23.477Z] Copying: 609/1024 [MB] (195 MBps)
[2024-12-09T10:25:24.412Z] Copying: 808/1024 [MB] (198 MBps)
[2024-12-09T10:25:24.412Z] Copying: 1007/1024 [MB] (199 MBps)
[2024-12-09T10:25:25.346Z] Copying: 1024/1024 [MB] (average 200 MBps)
00:33:54.549 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 81851 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1
00:33:54.549 10:25:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:33:54.549 [2024-12-09 10:25:25.273197] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization...
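The spdk_dd flags above pin the transfer size exactly; this sketch (plain C, not SPDK code) just redoes the arithmetic behind the "1024/1024 [MB]" and "average 200 MBps" figures.

    #include <stdio.h>

    int main(void) {
        long long bs = 4096, count = 262144;      /* --bs and --count from the command line */
        long long bytes = bs * count;             /* 1073741824 B */
        double mib = bytes / (1024.0 * 1024.0);   /* 1024 MiB, matching "1024/1024 [MB]" */
        printf("%lld B = %.0f MiB, ~%.1f s at the reported 200 MBps\n",
               bytes, mib, mib / 200.0);
        return 0;
    }

The ~5 s it predicts is consistent with the 10:25:21 .. 10:25:25 progress stamps of the first copy.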
00:33:54.549 [2024-12-09 10:25:25.273395] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82878 ]
00:33:54.807 [2024-12-09 10:25:25.452541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:54.807 [2024-12-09 10:25:25.569418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:55.375 [2024-12-09 10:25:25.904590] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:33:55.375 [2024-12-09 10:25:25.904694] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:33:55.375 [2024-12-09 10:25:25.970947] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore
00:33:55.375 [2024-12-09 10:25:25.971372] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0
00:33:55.375 [2024-12-09 10:25:25.971639] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1
00:33:55.635 [2024-12-09 10:25:26.254118] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Check configuration, duration: 0.005 ms, status: 0
00:33:55.635 [2024-12-09 10:25:26.254295] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Open base bdev, duration: 0.035 ms, status: 0
00:33:55.635 [2024-12-09 10:25:26.254361] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:33:55.635 [2024-12-09 10:25:26.255407] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:33:55.635 [2024-12-09 10:25:26.255439] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Open cache bdev, duration: 1.085 ms, status: 0
00:33:55.635 [2024-12-09 10:25:26.257604] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:33:55.635 [2024-12-09 10:25:26.271860] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Load super block, duration: 14.257 ms, status: 0
00:33:55.635 [2024-12-09 10:25:26.272013] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Validate super block, duration: 0.028 ms, status: 0
00:33:55.635 [2024-12-09 10:25:26.282375] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize memory pools, duration: 10.236 ms, status: 0
00:33:55.635 [2024-12-09 10:25:26.282563] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands, duration: 0.074 ms, status: 0
00:33:55.635 [2024-12-09 10:25:26.282713] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Register IO device, duration: 0.012 ms, status: 0
00:33:55.635 [2024-12-09 10:25:26.282785] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:33:55.635 [2024-12-09 10:25:26.287463] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize core IO channel, duration: 4.687 ms, status: 0
00:33:55.635 [2024-12-09 10:25:26.287751] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Decorate bands, duration: 0.011 ms, status: 0
00:33:55.635 [2024-12-09 10:25:26.287858] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:33:55.635 [2024-12-09 10:25:26.287895] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:33:55.635 [2024-12-09 10:25:26.287937] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:33:55.635 [2024-12-09 10:25:26.287972] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:33:55.635 [2024-12-09 10:25:26.288069] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:33:55.635 [2024-12-09 10:25:26.288084] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:33:55.635 [2024-12-09 10:25:26.288097] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:33:55.635 [2024-12-09 10:25:26.288115] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:33:55.635 [2024-12-09 10:25:26.288127] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:33:55.635 [2024-12-09 10:25:26.288138] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:33:55.635 [2024-12-09 10:25:26.288148] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:33:55.635 [2024-12-09 10:25:26.288159] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:33:55.635 [2024-12-09 10:25:26.288169] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:33:55.635 [2024-12-09 10:25:26.288179] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize layout, duration: 0.325 ms, status: 0
00:33:55.635 [2024-12-09 10:25:26.288311] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Verify layout, duration: 0.057 ms, status: 0
00:33:55.635 [2024-12-09 10:25:26.288454] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:33:55.635 [2024-12-09 10:25:26.288473] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region sb: offset 0.00 MiB, blocks 0.12 MiB
00:33:55.635 [2024-12-09 10:25:26.288505] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region l2p: offset 0.12 MiB, blocks 80.00 MiB
00:33:55.635 [2024-12-09 10:25:26.288533] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region band_md: offset 80.12 MiB, blocks 0.50 MiB
00:33:55.635 [2024-12-09 10:25:26.288574] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror: offset 80.62 MiB, blocks 0.50 MiB
00:33:55.635 [2024-12-09 10:25:26.288603] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md: offset 113.88 MiB, blocks 0.12 MiB
00:33:55.636 [2024-12-09 10:25:26.288634] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror: offset 114.00 MiB, blocks 0.12 MiB
00:33:55.636 [2024-12-09 10:25:26.288661] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0: offset 81.12 MiB, blocks 8.00 MiB
00:33:55.636 [2024-12-09 10:25:26.288688] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1: offset 89.12 MiB, blocks 8.00 MiB
00:33:55.636 [2024-12-09 10:25:26.288714] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2: offset 97.12 MiB, blocks 8.00 MiB
00:33:55.636 [2024-12-09 10:25:26.288740] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3: offset 105.12 MiB, blocks 8.00 MiB
00:33:55.636 [2024-12-09 10:25:26.288767] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md: offset 113.12 MiB, blocks 0.25 MiB
00:33:55.636 [2024-12-09 10:25:26.288793] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror: offset 113.38 MiB, blocks 0.25 MiB
00:33:55.636 [2024-12-09 10:25:26.288832] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log: offset 113.62 MiB, blocks 0.12 MiB
00:33:55.636 [2024-12-09 10:25:26.288885] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror: offset 113.75 MiB, blocks 0.12 MiB
00:33:55.636 [2024-12-09 10:25:26.288921] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:33:55.636 [2024-12-09 10:25:26.288931] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror: offset 0.00 MiB, blocks 0.12 MiB
00:33:55.636 [2024-12-09 10:25:26.288971] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region vmap: offset 102400.25 MiB, blocks 3.38 MiB
00:33:55.636 [2024-12-09 10:25:26.289001] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm: offset 0.25 MiB, blocks 102400.00 MiB
00:33:55.636 [2024-12-09 10:25:26.289040] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:33:55.636 [2024-12-09 10:25:26.289053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:33:55.636 [2024-12-09 10:25:26.289064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:33:55.636 [2024-12-09 10:25:26.289075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:33:55.636 [2024-12-09 10:25:26.289085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:33:55.636 [2024-12-09 10:25:26.289095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:33:55.636 [2024-12-09 10:25:26.289104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:33:55.636 [2024-12-09 10:25:26.289114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:33:55.636 [2024-12-09 10:25:26.289123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:33:55.636 [2024-12-09 10:25:26.289132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:33:55.636 [2024-12-09 10:25:26.289143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:33:55.636 [2024-12-09 10:25:26.289152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:33:55.636 [2024-12-09 10:25:26.289162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:33:55.636 [2024-12-09 10:25:26.289179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:33:55.636 [2024-12-09 10:25:26.289189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:33:55.636 [2024-12-09 10:25:26.289199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:33:55.636 [2024-12-09 10:25:26.289209] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:33:55.636 [2024-12-09 10:25:26.289220] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:33:55.636 [2024-12-09 10:25:26.289236] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:33:55.636 [2024-12-09 10:25:26.289246] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:33:55.636 [2024-12-09 10:25:26.289256] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:33:55.636 [2024-12-09 10:25:26.289266] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:33:55.636 [2024-12-09 10:25:26.289282] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Layout upgrade, duration: 0.885 ms, status: 0
00:33:55.636 [2024-12-09 10:25:26.330825] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize metadata, duration: 41.432 ms, status: 0
00:33:55.636 [2024-12-09 10:25:26.331683] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize band addresses, duration: 0.071 ms, status: 0
00:33:55.636 [2024-12-09 10:25:26.384451] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize NV cache, duration: 52.292 ms, status: 0
00:33:55.636 [2024-12-09 10:25:26.385151] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize valid map, duration: 0.005 ms, status: 0
00:33:55.636 [2024-12-09 10:25:26.386233] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize trim map, duration: 0.721 ms, status: 0
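The blk_offs/blk_sz columns above are counted in FTL blocks, while dump_region prints MiB. A small conversion sketch follows, again assuming 4 KiB blocks; the type-to-region mapping is inferred from the matching sizes (0x5000 blocks lines up with the 80.00 MiB l2p region, 0x1900000 with the 102400.00 MiB data_btm region), not stated by the log.

    #include <stdio.h>

    int main(void) {
        const double blk_mib = 4096.0 / (1024 * 1024);   /* assumed block size, in MiB */
        /* nvc table entry with blk_sz 0x5000 -> presumably the l2p region */
        printf("l2p region:  %.2f MiB\n", 0x5000 * blk_mib);       /* 80.00, as dumped */
        /* base-dev entry with blk_sz 0x1900000 -> presumably data_btm */
        printf("data region: %.2f MiB\n", 0x1900000 * blk_mib);    /* 102400.00, as dumped */
        /* cross-check: 20971520 L2P entries x 4-byte addresses fill the l2p region */
        printf("l2p table:   %.2f MiB\n", 20971520.0 * 4.0 / (1024 * 1024));
        return 0;
    }

All three figures come out exactly as printed in the layout dump, which is what makes the 4 KiB assumption plausible.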
00:33:55.636 [2024-12-09 10:25:26.386816] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands metadata, duration: 0.152 ms, status: 0
00:33:55.636 [2024-12-09 10:25:26.407290] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize reloc, duration: 19.971 ms, status: 0
00:33:55.637 [2024-12-09 10:25:26.424234] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:33:55.637 [2024-12-09 10:25:26.424483] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:33:55.637 [2024-12-09 10:25:26.424620] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore NV cache metadata, duration: 16.663 ms, status: 0
00:33:55.896 [2024-12-09 10:25:26.452372] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore valid map metadata, duration: 27.404 ms, status: 0
00:33:55.896 [2024-12-09 10:25:26.468998] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore band info metadata, duration: 16.436 ms, status: 0
00:33:55.896 [2024-12-09 10:25:26.482857] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore trim metadata, duration: 13.727 ms, status: 0
00:33:55.896 [2024-12-09 10:25:26.483733] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize P2L checkpointing, duration: 0.711 ms, status: 0
00:33:55.896 [2024-12-09 10:25:26.562262] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore P2L checkpoints, duration: 78.455 ms, status: 0
00:33:55.896 [2024-12-09 10:25:26.577119] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:33:55.896 [2024-12-09 10:25:26.582769] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize L2P, duration: 19.871 ms, status: 0
00:33:55.896 [2024-12-09 10:25:26.583397] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore L2P, duration: 0.010 ms, status: 0
00:33:55.896 [2024-12-09 10:25:26.583577] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize band initialization, duration: 0.054 ms, status: 0
00:33:55.896 [2024-12-09 10:25:26.583664] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Start core poller, duration: 0.007 ms, status: 0
00:33:55.896 [2024-12-09 10:25:26.583755] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:33:55.896 [2024-12-09 10:25:26.583774] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Self test on startup, duration: 0.020 ms, status: 0
00:33:55.896 [2024-12-09 10:25:26.614552] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL dirty state, duration: 30.691 ms, status: 0
00:33:55.896 [2024-12-09 10:25:26.614789] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize initialization, duration: 0.043 ms, status: 0
00:33:55.896 [2024-12-09 10:25:26.616463] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 361.703 ms, result 0
00:33:56.831 [2024-12-09T10:25:29.025Z .. 2024-12-09T10:26:12.971Z] Copying: 21/1024 .. 1015/1024 [MB] (21-23 MBps per interval)
[2024-12-09T10:26:13.230Z] Copying: 1048096/1048576 [kB] (8380 kBps)
[2024-12-09T10:26:13.230Z] Copying: 1024/1024 [MB] (average 21 MBps)
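A quick consistency check on that reported average, using the spdk_dd launch stamp (10:25:25) and the completion stamp (10:26:13) visible in the log; rough arithmetic only, since only whole-second resolution is available here.

    #include <stdio.h>

    int main(void) {
        int elapsed = (26 * 60 + 13) - (25 * 60 + 25);   /* 10:25:25 -> 10:26:13, 48 s */
        printf("%.1f MBps over %d s\n", 1024.0 / elapsed, elapsed);  /* ~21.3 MBps */
        return 0;
    }

That agrees with the "average 21 MBps" printed by the copy, roughly an order of magnitude slower than the first copy, presumably because this pass writes through the FTL device rather than to a plain file.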
00:34:42.433 [2024-12-09 10:26:13.219729] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Deinit core IO channel, duration: 0.005 ms, status: 0
00:34:42.433 [2024-12-09 10:26:13.222931] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:34:42.433 [2024-12-09 10:26:13.227774] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Unregister IO device, duration: 4.788 ms, status: 0
00:34:42.692 [2024-12-09 10:26:13.241091] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Stop core poller, duration: 10.979 ms, status: 0
00:34:42.692 [2024-12-09 10:26:13.264217] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist L2P, duration: 23.009 ms, status: 0
00:34:42.692 [2024-12-09 10:26:13.270804] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Finish L2P trims, duration: 6.436 ms, status: 0
00:34:42.692 [2024-12-09 10:26:13.304356] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist NV cache metadata, duration: 33.124 ms, status: 0
00:34:42.692 [2024-12-09 10:26:13.324152] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist valid map metadata, duration: 19.581 ms, status: 0
00:34:42.692 [2024-12-09 10:26:13.447299] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist P2L metadata, duration: 122.852 ms, status: 0
00:34:42.692 [2024-12-09 10:26:13.480071] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist band info metadata, duration: 32.604 ms, status: 0
00:34:42.952 [2024-12-09 10:26:13.511219] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist trim metadata, duration: 30.785 ms, status: 0
00:34:42.952 [2024-12-09 10:26:13.543324] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist superblock, duration: 31.903 ms, status: 0
00:34:42.952 [2024-12-09 10:26:13.583918] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL clean state, duration: 40.340 ms, status: 0
00:34:42.952 [2024-12-09 10:26:13.584099] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:34:42.952 [2024-12-09 10:26:13.584132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 128768 / 261120 wr_cnt: 1 state: open
00:34:42.952 [2024-12-09 10:26:13.584151 .. 10:26:13.585663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2 .. Band 100: 0 / 261120 wr_cnt: 0 state: free
00:34:42.953 [2024-12-09 10:26:13.585689] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:34:42.953 [2024-12-09 10:26:13.585705] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 36a3d27b-2eba-4fbc-84d4-d4a6a9f46810
00:34:42.953 [2024-12-09 10:26:13.585743] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 128768
00:34:42.953 [2024-12-09 10:26:13.585757] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 129728
00:34:42.953 [2024-12-09 10:26:13.585771] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 128768
00:34:42.953 [2024-12-09 10:26:13.585787] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0075
00:34:42.953 [2024-12-09 10:26:13.585801] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:34:42.953 [2024-12-09 10:26:13.585816] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  crit: 0
00:34:42.953 [2024-12-09 10:26:13.585844] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  high: 0
00:34:42.953 [2024-12-09 10:26:13.585860] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  low: 0
00:34:42.953 [2024-12-09 10:26:13.585873] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:34:42.953 [2024-12-09 10:26:13.585887] mngt/ftl_mngt.c: 427-428:trace_step: *NOTICE*: [FTL][ftl0] Action: Dump statistics
10:26:13.585917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.791 ms 00:34:42.953 [2024-12-09 10:26:13.585931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.953 [2024-12-09 10:26:13.607566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.953 [2024-12-09 10:26:13.607615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:42.953 [2024-12-09 10:26:13.607637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.577 ms 00:34:42.953 [2024-12-09 10:26:13.607652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.953 [2024-12-09 10:26:13.608258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.953 [2024-12-09 10:26:13.608295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:42.953 [2024-12-09 10:26:13.608324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.572 ms 00:34:42.953 [2024-12-09 10:26:13.608338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.953 [2024-12-09 10:26:13.664468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.953 [2024-12-09 10:26:13.664525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:42.953 [2024-12-09 10:26:13.664545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.953 [2024-12-09 10:26:13.664560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.953 [2024-12-09 10:26:13.664649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.953 [2024-12-09 10:26:13.664669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:42.953 [2024-12-09 10:26:13.664691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.953 [2024-12-09 10:26:13.664704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.953 [2024-12-09 10:26:13.664864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.953 [2024-12-09 10:26:13.664893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:42.953 [2024-12-09 10:26:13.664909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.953 [2024-12-09 10:26:13.664923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.953 [2024-12-09 10:26:13.664953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.953 [2024-12-09 10:26:13.664970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:42.954 [2024-12-09 10:26:13.664985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.954 [2024-12-09 10:26:13.664999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.212 [2024-12-09 10:26:13.797395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:43.212 [2024-12-09 10:26:13.797480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:43.212 [2024-12-09 10:26:13.797504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:43.212 [2024-12-09 10:26:13.797520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.212 [2024-12-09 10:26:13.889933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:43.212 [2024-12-09 10:26:13.889983] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:43.212 [2024-12-09 10:26:13.890000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:43.212 [2024-12-09 10:26:13.890017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.212 [2024-12-09 10:26:13.890117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:43.212 [2024-12-09 10:26:13.890134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:43.212 [2024-12-09 10:26:13.890145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:43.212 [2024-12-09 10:26:13.890155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.212 [2024-12-09 10:26:13.890198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:43.213 [2024-12-09 10:26:13.890213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:43.213 [2024-12-09 10:26:13.890224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:43.213 [2024-12-09 10:26:13.890233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.213 [2024-12-09 10:26:13.890361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:43.213 [2024-12-09 10:26:13.890380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:43.213 [2024-12-09 10:26:13.890391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:43.213 [2024-12-09 10:26:13.890401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.213 [2024-12-09 10:26:13.890452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:43.213 [2024-12-09 10:26:13.890468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:43.213 [2024-12-09 10:26:13.890479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:43.213 [2024-12-09 10:26:13.890489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.213 [2024-12-09 10:26:13.890544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:43.213 [2024-12-09 10:26:13.890559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:43.213 [2024-12-09 10:26:13.890570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:43.213 [2024-12-09 10:26:13.890606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.213 [2024-12-09 10:26:13.890668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:43.213 [2024-12-09 10:26:13.890684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:43.213 [2024-12-09 10:26:13.890695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:43.213 [2024-12-09 10:26:13.890705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:43.213 [2024-12-09 10:26:13.890894] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 674.417 ms, result 0 00:34:44.589 00:34:44.589 00:34:44.589 10:26:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:34:46.494 10:26:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:34:46.752 [2024-12-09 10:26:17.312982] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:34:46.752 [2024-12-09 10:26:17.313179] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83372 ] 00:34:46.753 [2024-12-09 10:26:17.506806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:47.011 [2024-12-09 10:26:17.649082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:47.270 [2024-12-09 10:26:17.977654] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:47.270 [2024-12-09 10:26:17.978075] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:47.530 [2024-12-09 10:26:18.139352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.530 [2024-12-09 10:26:18.139401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:34:47.530 [2024-12-09 10:26:18.139421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:34:47.530 [2024-12-09 10:26:18.139432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.530 [2024-12-09 10:26:18.139492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.530 [2024-12-09 10:26:18.139512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:47.530 [2024-12-09 10:26:18.139524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:34:47.530 [2024-12-09 10:26:18.139534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.530 [2024-12-09 10:26:18.139561] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:34:47.530 [2024-12-09 10:26:18.140426] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:34:47.530 [2024-12-09 10:26:18.140460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.530 [2024-12-09 10:26:18.140473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:47.530 [2024-12-09 10:26:18.140486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.905 ms 00:34:47.530 [2024-12-09 10:26:18.140497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.530 [2024-12-09 10:26:18.142464] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:34:47.530 [2024-12-09 10:26:18.157014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.530 [2024-12-09 10:26:18.157054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:34:47.530 [2024-12-09 10:26:18.157071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.551 ms 00:34:47.530 [2024-12-09 10:26:18.157081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.530 [2024-12-09 10:26:18.157174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.530 [2024-12-09 10:26:18.157196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:34:47.530 [2024-12-09 10:26:18.157208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:34:47.530 [2024-12-09 
10:26:18.157218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.530 [2024-12-09 10:26:18.166806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.530 [2024-12-09 10:26:18.166875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:47.530 [2024-12-09 10:26:18.166907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.518 ms 00:34:47.530 [2024-12-09 10:26:18.166929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.530 [2024-12-09 10:26:18.167043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.530 [2024-12-09 10:26:18.167062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:47.530 [2024-12-09 10:26:18.167074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:34:47.530 [2024-12-09 10:26:18.167084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.530 [2024-12-09 10:26:18.167190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.530 [2024-12-09 10:26:18.167212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:34:47.530 [2024-12-09 10:26:18.167239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:34:47.530 [2024-12-09 10:26:18.167251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.530 [2024-12-09 10:26:18.167303] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:47.530 [2024-12-09 10:26:18.172033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.530 [2024-12-09 10:26:18.172068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:47.530 [2024-12-09 10:26:18.172107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.739 ms 00:34:47.530 [2024-12-09 10:26:18.172118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.530 [2024-12-09 10:26:18.172164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.530 [2024-12-09 10:26:18.172181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:34:47.530 [2024-12-09 10:26:18.172193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:34:47.530 [2024-12-09 10:26:18.172203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.530 [2024-12-09 10:26:18.172283] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:34:47.530 [2024-12-09 10:26:18.172328] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:34:47.530 [2024-12-09 10:26:18.172365] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:34:47.530 [2024-12-09 10:26:18.172390] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:34:47.530 [2024-12-09 10:26:18.172481] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:34:47.530 [2024-12-09 10:26:18.172495] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:34:47.530 [2024-12-09 10:26:18.172508] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:34:47.530 
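
A quick cross-check of the L2P sizing in the ftl_layout_setup dump just below: 20971520 L2P entries at the reported 4-byte address size come to exactly 80 MiB, which matches the "Region l2p ... blocks: 80.00 MiB" line in the NV cache layout, and apparently also the superblock region "type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000" further down (0x5000 = 20480 blocks, assuming the FTL's 4 KiB block size and that region type 0x2 is the L2P). A minimal shell sketch of that arithmetic — not output from the captured run:

# L2P table: 20971520 entries x 4-byte addresses; SB region: 0x5000 blocks x 4 KiB (assumed block size)
$ echo $((20971520 * 4 / 1048576)) MiB $((0x5000 * 4096 / 1048576)) MiB
80 MiB 80 MiB
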
[2024-12-09 10:26:18.172521] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:34:47.530 [2024-12-09 10:26:18.172534] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:34:47.530 [2024-12-09 10:26:18.172545] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:34:47.530 [2024-12-09 10:26:18.172555] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:34:47.530 [2024-12-09 10:26:18.172571] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:34:47.531 [2024-12-09 10:26:18.172582] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:34:47.531 [2024-12-09 10:26:18.172593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.531 [2024-12-09 10:26:18.172605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:34:47.531 [2024-12-09 10:26:18.172616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.336 ms 00:34:47.531 [2024-12-09 10:26:18.172625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.531 [2024-12-09 10:26:18.172707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.531 [2024-12-09 10:26:18.172722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:34:47.531 [2024-12-09 10:26:18.172733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:34:47.531 [2024-12-09 10:26:18.172743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.531 [2024-12-09 10:26:18.172883] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:34:47.531 [2024-12-09 10:26:18.172906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:34:47.531 [2024-12-09 10:26:18.172918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:47.531 [2024-12-09 10:26:18.172929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:47.531 [2024-12-09 10:26:18.172940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:34:47.531 [2024-12-09 10:26:18.172951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:34:47.531 [2024-12-09 10:26:18.172961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:34:47.531 [2024-12-09 10:26:18.172972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:34:47.531 [2024-12-09 10:26:18.172982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:34:47.531 [2024-12-09 10:26:18.172992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:47.531 [2024-12-09 10:26:18.173002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:34:47.531 [2024-12-09 10:26:18.173012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:34:47.531 [2024-12-09 10:26:18.173022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:47.531 [2024-12-09 10:26:18.173046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:34:47.531 [2024-12-09 10:26:18.173058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:34:47.531 [2024-12-09 10:26:18.173069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:47.531 [2024-12-09 10:26:18.173079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 
00:34:47.531 [2024-12-09 10:26:18.173090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:34:47.531 [2024-12-09 10:26:18.173100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:47.531 [2024-12-09 10:26:18.173110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:34:47.531 [2024-12-09 10:26:18.173119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:34:47.531 [2024-12-09 10:26:18.173130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:47.531 [2024-12-09 10:26:18.173139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:34:47.531 [2024-12-09 10:26:18.173150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:34:47.531 [2024-12-09 10:26:18.173160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:47.531 [2024-12-09 10:26:18.173169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:34:47.531 [2024-12-09 10:26:18.173180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:34:47.531 [2024-12-09 10:26:18.173189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:47.531 [2024-12-09 10:26:18.173213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:34:47.531 [2024-12-09 10:26:18.173222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:34:47.531 [2024-12-09 10:26:18.173232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:47.531 [2024-12-09 10:26:18.173241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:34:47.531 [2024-12-09 10:26:18.173250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:34:47.531 [2024-12-09 10:26:18.173259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:47.531 [2024-12-09 10:26:18.173268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:34:47.531 [2024-12-09 10:26:18.173277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:34:47.531 [2024-12-09 10:26:18.173286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:47.531 [2024-12-09 10:26:18.173296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:34:47.531 [2024-12-09 10:26:18.173305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:34:47.531 [2024-12-09 10:26:18.173315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:47.531 [2024-12-09 10:26:18.173324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:34:47.531 [2024-12-09 10:26:18.173333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:34:47.531 [2024-12-09 10:26:18.173343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:47.531 [2024-12-09 10:26:18.173352] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:34:47.531 [2024-12-09 10:26:18.173363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:34:47.531 [2024-12-09 10:26:18.173372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:47.531 [2024-12-09 10:26:18.173384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:47.531 [2024-12-09 10:26:18.173395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:34:47.531 [2024-12-09 10:26:18.173405] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:34:47.531 [2024-12-09 10:26:18.173415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:34:47.531 [2024-12-09 10:26:18.173425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:34:47.531 [2024-12-09 10:26:18.173434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:34:47.531 [2024-12-09 10:26:18.173444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:34:47.531 [2024-12-09 10:26:18.173455] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:34:47.531 [2024-12-09 10:26:18.173468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:47.531 [2024-12-09 10:26:18.173486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:34:47.531 [2024-12-09 10:26:18.173497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:34:47.531 [2024-12-09 10:26:18.173507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:34:47.531 [2024-12-09 10:26:18.173517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:34:47.531 [2024-12-09 10:26:18.173527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:34:47.531 [2024-12-09 10:26:18.173537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:34:47.531 [2024-12-09 10:26:18.173548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:34:47.531 [2024-12-09 10:26:18.173557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:34:47.531 [2024-12-09 10:26:18.173567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:34:47.531 [2024-12-09 10:26:18.173577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:34:47.531 [2024-12-09 10:26:18.173586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:34:47.531 [2024-12-09 10:26:18.173597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:34:47.531 [2024-12-09 10:26:18.173607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:34:47.531 [2024-12-09 10:26:18.173617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:34:47.531 [2024-12-09 10:26:18.173627] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:34:47.531 [2024-12-09 10:26:18.173639] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 
blk_offs:0x0 blk_sz:0x20 00:34:47.531 [2024-12-09 10:26:18.173652] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:34:47.531 [2024-12-09 10:26:18.173662] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:34:47.531 [2024-12-09 10:26:18.173673] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:34:47.531 [2024-12-09 10:26:18.173683] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:34:47.532 [2024-12-09 10:26:18.173694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.532 [2024-12-09 10:26:18.173705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:34:47.532 [2024-12-09 10:26:18.173716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.902 ms 00:34:47.532 [2024-12-09 10:26:18.173728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.532 [2024-12-09 10:26:18.213389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.532 [2024-12-09 10:26:18.213628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:47.532 [2024-12-09 10:26:18.213783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.600 ms 00:34:47.532 [2024-12-09 10:26:18.213974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.532 [2024-12-09 10:26:18.214137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.532 [2024-12-09 10:26:18.214299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:34:47.532 [2024-12-09 10:26:18.214410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:34:47.532 [2024-12-09 10:26:18.214462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.532 [2024-12-09 10:26:18.265771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.532 [2024-12-09 10:26:18.266012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:47.532 [2024-12-09 10:26:18.266172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.956 ms 00:34:47.532 [2024-12-09 10:26:18.266227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.532 [2024-12-09 10:26:18.266489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.532 [2024-12-09 10:26:18.266552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:47.532 [2024-12-09 10:26:18.266760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:47.532 [2024-12-09 10:26:18.266817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.532 [2024-12-09 10:26:18.267600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.532 [2024-12-09 10:26:18.267760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:47.532 [2024-12-09 10:26:18.267911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.574 ms 00:34:47.532 [2024-12-09 10:26:18.268038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.532 [2024-12-09 10:26:18.268295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:34:47.532 [2024-12-09 10:26:18.268358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:47.532 [2024-12-09 10:26:18.268480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.177 ms 00:34:47.532 [2024-12-09 10:26:18.268531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.532 [2024-12-09 10:26:18.286292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.532 [2024-12-09 10:26:18.286467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:47.532 [2024-12-09 10:26:18.286635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.702 ms 00:34:47.532 [2024-12-09 10:26:18.286702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.532 [2024-12-09 10:26:18.301107] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:34:47.532 [2024-12-09 10:26:18.301308] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:34:47.532 [2024-12-09 10:26:18.301469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.532 [2024-12-09 10:26:18.301588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:34:47.532 [2024-12-09 10:26:18.301647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.382 ms 00:34:47.532 [2024-12-09 10:26:18.301745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.532 [2024-12-09 10:26:18.326155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.532 [2024-12-09 10:26:18.326355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:34:47.532 [2024-12-09 10:26:18.326506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.227 ms 00:34:47.532 [2024-12-09 10:26:18.326561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.791 [2024-12-09 10:26:18.340027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.791 [2024-12-09 10:26:18.340214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:34:47.791 [2024-12-09 10:26:18.340357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.244 ms 00:34:47.791 [2024-12-09 10:26:18.340388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.791 [2024-12-09 10:26:18.353084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.791 [2024-12-09 10:26:18.353238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:34:47.791 [2024-12-09 10:26:18.353381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.651 ms 00:34:47.791 [2024-12-09 10:26:18.353438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.791 [2024-12-09 10:26:18.354237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.791 [2024-12-09 10:26:18.354402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:34:47.791 [2024-12-09 10:26:18.354446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.643 ms 00:34:47.791 [2024-12-09 10:26:18.354466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.791 [2024-12-09 10:26:18.425554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.791 [2024-12-09 10:26:18.425623] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:34:47.791 [2024-12-09 10:26:18.425670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.053 ms 00:34:47.791 [2024-12-09 10:26:18.425683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.791 [2024-12-09 10:26:18.437501] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:34:47.791 [2024-12-09 10:26:18.441601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.791 [2024-12-09 10:26:18.441634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:34:47.791 [2024-12-09 10:26:18.441667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.849 ms 00:34:47.791 [2024-12-09 10:26:18.441678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.791 [2024-12-09 10:26:18.441787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.791 [2024-12-09 10:26:18.441806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:34:47.791 [2024-12-09 10:26:18.441824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:34:47.791 [2024-12-09 10:26:18.441851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.791 [2024-12-09 10:26:18.444140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.791 [2024-12-09 10:26:18.444178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:34:47.791 [2024-12-09 10:26:18.444224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.195 ms 00:34:47.791 [2024-12-09 10:26:18.444250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.791 [2024-12-09 10:26:18.444288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.791 [2024-12-09 10:26:18.444304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:34:47.791 [2024-12-09 10:26:18.444316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:34:47.791 [2024-12-09 10:26:18.444327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.791 [2024-12-09 10:26:18.444378] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:34:47.791 [2024-12-09 10:26:18.444394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.791 [2024-12-09 10:26:18.444405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:34:47.791 [2024-12-09 10:26:18.444417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:34:47.791 [2024-12-09 10:26:18.444428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.791 [2024-12-09 10:26:18.471303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.791 [2024-12-09 10:26:18.471500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:34:47.791 [2024-12-09 10:26:18.471548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.834 ms 00:34:47.791 [2024-12-09 10:26:18.471563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.791 [2024-12-09 10:26:18.471652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:47.791 [2024-12-09 10:26:18.471672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:34:47.791 [2024-12-09 10:26:18.471685] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:34:47.791 [2024-12-09 10:26:18.471697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:47.791 [2024-12-09 10:26:18.477491] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 336.093 ms, result 0 00:34:49.167  [2024-12-09T10:26:20.899Z] Copying: 932/1048576 [kB] (932 kBps) [2024-12-09T10:26:21.834Z] Copying: 4960/1048576 [kB] (4028 kBps) [2024-12-09T10:26:22.770Z] Copying: 27/1024 [MB] (22 MBps) [2024-12-09T10:26:23.706Z] Copying: 55/1024 [MB] (27 MBps) [2024-12-09T10:26:25.083Z] Copying: 82/1024 [MB] (27 MBps) [2024-12-09T10:26:26.019Z] Copying: 110/1024 [MB] (27 MBps) [2024-12-09T10:26:26.955Z] Copying: 138/1024 [MB] (28 MBps) [2024-12-09T10:26:27.890Z] Copying: 167/1024 [MB] (28 MBps) [2024-12-09T10:26:28.826Z] Copying: 194/1024 [MB] (26 MBps) [2024-12-09T10:26:29.763Z] Copying: 221/1024 [MB] (26 MBps) [2024-12-09T10:26:30.699Z] Copying: 248/1024 [MB] (27 MBps) [2024-12-09T10:26:32.076Z] Copying: 275/1024 [MB] (26 MBps) [2024-12-09T10:26:33.037Z] Copying: 302/1024 [MB] (27 MBps) [2024-12-09T10:26:33.973Z] Copying: 329/1024 [MB] (27 MBps) [2024-12-09T10:26:34.909Z] Copying: 357/1024 [MB] (27 MBps) [2024-12-09T10:26:35.844Z] Copying: 384/1024 [MB] (27 MBps) [2024-12-09T10:26:36.781Z] Copying: 411/1024 [MB] (26 MBps) [2024-12-09T10:26:37.718Z] Copying: 439/1024 [MB] (27 MBps) [2024-12-09T10:26:39.094Z] Copying: 466/1024 [MB] (27 MBps) [2024-12-09T10:26:40.029Z] Copying: 494/1024 [MB] (27 MBps) [2024-12-09T10:26:40.965Z] Copying: 521/1024 [MB] (27 MBps) [2024-12-09T10:26:41.901Z] Copying: 549/1024 [MB] (27 MBps) [2024-12-09T10:26:42.836Z] Copying: 577/1024 [MB] (27 MBps) [2024-12-09T10:26:43.772Z] Copying: 604/1024 [MB] (27 MBps) [2024-12-09T10:26:44.707Z] Copying: 631/1024 [MB] (27 MBps) [2024-12-09T10:26:46.088Z] Copying: 659/1024 [MB] (27 MBps) [2024-12-09T10:26:47.024Z] Copying: 686/1024 [MB] (27 MBps) [2024-12-09T10:26:47.959Z] Copying: 714/1024 [MB] (27 MBps) [2024-12-09T10:26:48.895Z] Copying: 741/1024 [MB] (27 MBps) [2024-12-09T10:26:49.832Z] Copying: 769/1024 [MB] (27 MBps) [2024-12-09T10:26:50.767Z] Copying: 796/1024 [MB] (27 MBps) [2024-12-09T10:26:51.702Z] Copying: 825/1024 [MB] (28 MBps) [2024-12-09T10:26:53.078Z] Copying: 853/1024 [MB] (28 MBps) [2024-12-09T10:26:54.014Z] Copying: 881/1024 [MB] (27 MBps) [2024-12-09T10:26:54.949Z] Copying: 909/1024 [MB] (27 MBps) [2024-12-09T10:26:55.885Z] Copying: 936/1024 [MB] (27 MBps) [2024-12-09T10:26:56.820Z] Copying: 964/1024 [MB] (27 MBps) [2024-12-09T10:26:57.756Z] Copying: 991/1024 [MB] (27 MBps) [2024-12-09T10:26:58.014Z] Copying: 1019/1024 [MB] (27 MBps) [2024-12-09T10:26:58.273Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-12-09 10:26:58.074165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:27.476 [2024-12-09 10:26:58.074271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:35:27.476 [2024-12-09 10:26:58.074302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:35:27.476 [2024-12-09 10:26:58.074321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.476 [2024-12-09 10:26:58.074364] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:35:27.476 [2024-12-09 10:26:58.078431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:27.476 [2024-12-09 10:26:58.078630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Unregister IO device 00:35:27.476 [2024-12-09 10:26:58.078753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.037 ms 00:35:27.476 [2024-12-09 10:26:58.078802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.476 [2024-12-09 10:26:58.079139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:27.476 [2024-12-09 10:26:58.079216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:35:27.476 [2024-12-09 10:26:58.079409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:35:27.476 [2024-12-09 10:26:58.079459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.476 [2024-12-09 10:26:58.090060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:27.476 [2024-12-09 10:26:58.090258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:35:27.476 [2024-12-09 10:26:58.090410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.549 ms 00:35:27.477 [2024-12-09 10:26:58.090457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.477 [2024-12-09 10:26:58.096910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:27.477 [2024-12-09 10:26:58.097104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:35:27.477 [2024-12-09 10:26:58.097228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.261 ms 00:35:27.477 [2024-12-09 10:26:58.097274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.477 [2024-12-09 10:26:58.124569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:27.477 [2024-12-09 10:26:58.124738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:35:27.477 [2024-12-09 10:26:58.124895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.200 ms 00:35:27.477 [2024-12-09 10:26:58.124917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.477 [2024-12-09 10:26:58.140677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:27.477 [2024-12-09 10:26:58.140716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:35:27.477 [2024-12-09 10:26:58.140732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.717 ms 00:35:27.477 [2024-12-09 10:26:58.140742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.477 [2024-12-09 10:26:58.142789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:27.477 [2024-12-09 10:26:58.142876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:35:27.477 [2024-12-09 10:26:58.142924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.020 ms 00:35:27.477 [2024-12-09 10:26:58.142944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.477 [2024-12-09 10:26:58.168598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:27.477 [2024-12-09 10:26:58.168634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:35:27.477 [2024-12-09 10:26:58.168649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.633 ms 00:35:27.477 [2024-12-09 10:26:58.168658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.477 [2024-12-09 10:26:58.193515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:27.477 [2024-12-09 
10:26:58.193552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:35:27.477 [2024-12-09 10:26:58.193566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.819 ms 00:35:27.477 [2024-12-09 10:26:58.193576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.477 [2024-12-09 10:26:58.219179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:27.477 [2024-12-09 10:26:58.219381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:35:27.477 [2024-12-09 10:26:58.219407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.567 ms 00:35:27.477 [2024-12-09 10:26:58.219419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.477 [2024-12-09 10:26:58.247231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:27.477 [2024-12-09 10:26:58.247279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:35:27.477 [2024-12-09 10:26:58.247311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.725 ms 00:35:27.477 [2024-12-09 10:26:58.247322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:27.477 [2024-12-09 10:26:58.247361] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:35:27.477 [2024-12-09 10:26:58.247384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:35:27.477 [2024-12-09 10:26:58.247397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:35:27.477 [2024-12-09 10:26:58.247416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:35:27.477 [2024-12-09 10:26:58.247428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:27.477 [2024-12-09 10:26:58.247439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:27.477 [2024-12-09 10:26:58.247450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:27.477 [2024-12-09 10:26:58.247461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:27.477 [2024-12-09 10:26:58.247472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:27.477 [2024-12-09 10:26:58.247483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:27.477 [2024-12-09 10:26:58.247493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:35:27.477 [2024-12-09 10:26:58.247504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:35:27.477 [2024-12-09 10:26:58.247515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:27.477 [2024-12-09 10:26:58.247526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:27.477 [2024-12-09 10:26:58.247536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:35:27.477 [2024-12-09 10:26:58.247547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:27.477 [2024-12-09 10:26:58.247558] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:35:27.477 [2024-12-09 10:26:58.247568 .. 248677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 17-100: 0 / 261120 wr_cnt: 0 state: free (84 identical records)
00:35:27.478 [2024-12-09 10:26:58.248711] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:35:27.478 [2024-12-09 10:26:58.248722] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:      36a3d27b-2eba-4fbc-84d4-d4a6a9f46810
00:35:27.478 [2024-12-09 10:26:58.248733] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656
00:35:27.478 [2024-12-09 10:26:58.248743] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:     135872
00:35:27.478 [2024-12-09 10:26:58.248759] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:      133888
00:35:27.478 [2024-12-09 10:26:58.248771] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:              1.0148
00:35:27.478 [2024-12-09 10:26:58.248782 .. 248883] ftl_debug.c: 218-220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: crit: 0, high: 0, low: 0, start: 0
00:35:27.478 [2024-12-09 10:26:58.248894] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Dump statistics': duration 1.534 ms, status 0
00:35:27.478 [2024-12-09 10:26:58.264897] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinitialize L2P': duration 15.926 ms, status 0
00:35:27.478 [2024-12-09 10:26:58.265433] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinitialize P2L checkpointing': duration 0.435 ms, status 0
00:35:27.737 [2024-12-09 10:26:58.307289] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize reloc': duration 0.000 ms, status 0
00:35:27.737 [2024-12-09 10:26:58.307603] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize bands metadata': duration 0.000 ms, status 0
00:35:27.737 [2024-12-09 10:26:58.307765] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize trim map': duration 0.000 ms, status 0
00:35:27.737 [2024-12-09 10:26:58.307851] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize valid map': duration 0.000 ms, status 0
00:35:27.737 [2024-12-09 10:26:58.404399] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize NV cache': duration 0.000 ms, status 0
00:35:27.737 [2024-12-09 10:26:58.483158] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize metadata': duration 0.000 ms, status 0
00:35:27.737 [2024-12-09 10:26:58.483518] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize core IO channel': duration 0.000 ms, status 0
00:35:27.737 [2024-12-09 10:26:58.483649] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize bands': duration 0.000 ms, status 0
00:35:27.737 [2024-12-09 10:26:58.484088] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize memory pools': duration 0.000 ms, status 0
00:35:27.737 [2024-12-09 10:26:58.484197] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize superblock': duration 0.000 ms, status 0
00:35:27.737 [2024-12-09 10:26:58.484289] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Open cache bdev': duration 0.000 ms, status 0
00:35:27.737 [2024-12-09 10:26:58.484453] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Open base bdev': duration 0.000 ms, status 0
00:35:27.737 [2024-12-09 10:26:58.484656] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 410.475 ms, result 0
00:35:28.728 10:26:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:35:30.630 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
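Note on the shutdown statistics above: the reported WAF is simply total writes divided by user writes, and the spdk_dd transfer that follows is sized to re-read exactly the data region under test. A minimal Python check of both; the 4096-byte FTL logical block size is an assumption here, though it is consistent with the 1024 MB total the copy loop reports later:

    # Counters from ftl_dev_dump_stats above
    total_writes, user_writes = 135872, 133888
    print(round(total_writes / user_writes, 4))  # 1.0148 -> matches the reported WAF

    # spdk_dd --count=262144 --skip=262144: with an assumed 4096-byte block size,
    # this reads the second 1 GiB of ftl0 into testfile2
    print(262144 * 4096 // 2**20, "MiB")         # 1024 MiB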
00:35:30.630 10:27:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:35:30.630 [2024-12-09 10:27:01.296155] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization...
00:35:30.630 [2024-12-09 10:27:01.296332] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83794 ]
00:35:30.888 [2024-12-09 10:27:01.474322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:30.888 [2024-12-09 10:27:01.623977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:35:31.455 [2024-12-09 10:27:01.950256 .. 950361] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 (2 identical records)
00:35:31.455 [2024-12-09 10:27:02.111117] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Check configuration': duration 0.006 ms, status 0
00:35:31.455 [2024-12-09 10:27:02.111251] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Open base bdev': duration 0.034 ms, status 0
00:35:31.456 [2024-12-09 10:27:02.111317] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:35:31.456 [2024-12-09 10:27:02.112085] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:35:31.456 [2024-12-09 10:27:02.112109] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Open cache bdev': duration 0.799 ms, status 0
00:35:31.456 [2024-12-09 10:27:02.114115] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:35:31.456 [2024-12-09 10:27:02.128309] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Load super block': duration 14.195 ms, status 0
00:35:31.456 [2024-12-09 10:27:02.128440] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Validate super block': duration 0.024 ms, status 0
00:35:31.456 [2024-12-09 10:27:02.138888] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize memory pools': duration 10.333 ms, status 0
00:35:31.456 [2024-12-09 10:27:02.139108] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize bands': duration 0.080 ms, status 0
00:35:31.456 [2024-12-09 10:27:02.139209] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Register IO device': duration 0.009 ms, status 0
00:35:31.456 [2024-12-09 10:27:02.139281] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:35:31.456 [2024-12-09 10:27:02.143828] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize core IO channel': duration 4.554 ms, status 0
00:35:31.456 [2024-12-09 10:27:02.143971] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Decorate bands': duration 0.010 ms, status 0
00:35:31.456 [2024-12-09 10:27:02.144051] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:35:31.456 [2024-12-09 10:27:02.144083 .. 144345] upgrade/ftl_sb_v5.c: *NOTICE*: [FTL][ftl0] superblock v5 blob area load/store: nvc layout blob 0x150, base layout blob 0x48, layout blob 0x190 bytes
00:35:31.456 [2024-12-09 10:27:02.144359] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:     103424.00 MiB
00:35:31.456 [2024-12-09 10:27:02.144371] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:   5171.00 MiB
00:35:31.456 [2024-12-09 10:27:02.144383] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                20971520
00:35:31.456 [2024-12-09 10:27:02.144394] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:           4
00:35:31.456 [2024-12-09 10:27:02.144409] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:       2048
00:35:31.456 [2024-12-09 10:27:02.144419] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count:       5
00:35:31.456 [2024-12-09 10:27:02.144431] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize layout': duration 0.382 ms, status 0
00:35:31.456 [2024-12-09 10:27:02.144547] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Verify layout': duration 0.062 ms, status 0
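The L2P figures above can be cross-checked against the layout dump that follows: 20971520 entries of 4 bytes each is exactly the 80.00 MiB l2p region, and at one entry per block those entries address 80 GiB of user space. A small Python sketch, assuming the 4096-byte FTL block size used throughout these checks:

    # Values from the ftl_layout_setup notices above
    l2p_entries, l2p_addr_size = 20971520, 4
    print(l2p_entries * l2p_addr_size / 2**20, "MiB")  # 80.0 -> the l2p region below
    print(l2p_entries * 4096 / 2**30, "GiB")           # 80.0 GiB of addressable user space
    # The SB metadata layout carries the same region as blk_sz 0x5000 (20480) blocks:
    print(0x5000 * 4096 / 2**20, "MiB")                # 80.0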
00:35:31.456 [2024-12-09 10:27:02.145037] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout (region / offset MiB / blocks MiB):
  sb                 0.00       0.12
  l2p                0.12      80.00
  band_md           80.12       0.50
  band_md_mirror    80.62       0.50
  nvc_md           113.88       0.12
  nvc_md_mirror    114.00       0.12
  p2l0              81.12       8.00
  p2l1              89.12       8.00
  p2l2              97.12       8.00
  p2l3             105.12       8.00
  trim_md          113.12       0.25
  trim_md_mirror   113.38       0.25
  trim_log         113.62       0.12
  trim_log_mirror  113.75       0.12
00:35:31.456 [2024-12-09 10:27:02.145512] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout (region / offset MiB / blocks MiB):
  sb_mirror          0.00       0.12
  vmap          102400.25       3.38
  data_btm           0.25  102400.00
00:35:31.456 [2024-12-09 10:27:02.145606] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc (type / ver / blk_offs / blk_sz):
  0x0         5   0x0        0x20
  0x2         0   0x20       0x5000
  0x3         2   0x5020     0x80
  0x4         2   0x50a0     0x80
  0xa         2   0x5120     0x800
  0xb         2   0x5920     0x800
  0xc         2   0x6120     0x800
  0xd         2   0x6920     0x800
  0xe         0   0x7120     0x40
  0xf         0   0x7160     0x40
  0x10        1   0x71a0     0x20
  0x11        1   0x71c0     0x20
  0x6         2   0x71e0     0x20
  0x7         2   0x7200     0x20
  0xfffffffe  0   0x7220     0x13c0e0
00:35:31.456 [2024-12-09 10:27:02.145766] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev (type / ver / blk_offs / blk_sz):
  0x1         5   0x0        0x20
  0xfffffffe  0   0x20       0x20
  0x9         0   0x40       0x1900000
  0x5         0   0x1900040  0x360
  0xfffffffe  0   0x19003a0  0x3fc60
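The blk_offs/blk_sz pairs above tile the NV-cache metadata area with no gaps, and the 0x20-block entries are the "0.12 MiB" regions of the MiB-denominated dump (0.125 MiB, truncated to two decimals). A quick Python verification, again assuming 4096-byte blocks:

    # (blk_offs, blk_sz) for the nvc regions, in order of ascending offset
    regions = [(0x0, 0x20), (0x20, 0x5000), (0x5020, 0x80), (0x50a0, 0x80),
               (0x5120, 0x800), (0x5920, 0x800), (0x6120, 0x800), (0x6920, 0x800),
               (0x7120, 0x40), (0x7160, 0x40), (0x71a0, 0x20), (0x71c0, 0x20),
               (0x71e0, 0x20), (0x7200, 0x20), (0x7220, 0x13c0e0)]
    for (offs, sz), (nxt, _) in zip(regions, regions[1:]):
        assert offs + sz == nxt        # each region starts where the previous ends
    print(0x20 * 4096 / 2**20, "MiB")  # 0.125 -> dumped as '0.12 MiB'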
00:35:31.456 [2024-12-09 10:27:02.145828] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Layout upgrade': duration 0.866 ms, status 0
00:35:31.456 [2024-12-09 10:27:02.185303] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize metadata': duration 39.374 ms, status 0
00:35:31.456 [2024-12-09 10:27:02.185874] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize band addresses': duration 0.093 ms, status 0
00:35:31.456 [2024-12-09 10:27:02.235964] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize NV cache': duration 49.617 ms, status 0
00:35:31.456 [2024-12-09 10:27:02.236481] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize valid map': duration 0.004 ms, status 0
00:35:31.456 [2024-12-09 10:27:02.237466] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize trim map': duration 0.561 ms, status 0
00:35:31.456 [2024-12-09 10:27:02.238149] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize bands metadata': duration 0.158 ms, status 0
00:35:31.715 [2024-12-09 10:27:02.259608] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize reloc': duration 20.876 ms, status 0
00:35:31.715 [2024-12-09 10:27:02.276029] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:35:31.715 [2024-12-09 10:27:02.276254] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:35:31.715 [2024-12-09 10:27:02.276398] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore NV cache metadata': duration 16.148 ms, status 0
00:35:31.715 [2024-12-09 10:27:02.301427] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore valid map metadata': duration 24.730 ms, status 0
00:35:31.715 [2024-12-09 10:27:02.314918] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore band info metadata': duration 13.166 ms, status 0
00:35:31.715 [2024-12-09 10:27:02.327516] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore trim metadata': duration 12.337 ms, status 0
00:35:31.715 [2024-12-09 10:27:02.328396] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize P2L checkpointing': duration 0.698 ms, status 0
00:35:31.715 [2024-12-09 10:27:02.398350] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore P2L checkpoints': duration 69.810 ms, status 0
00:35:31.715 [2024-12-09 10:27:02.409343] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:35:31.715 [2024-12-09 10:27:02.412346] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize L2P': duration 13.574 ms, status 0
00:35:31.715 [2024-12-09 10:27:02.412524] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore L2P': duration 0.008 ms, status 0
00:35:31.715 [2024-12-09 10:27:02.414204] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Finalize band initialization': duration 1.582 ms, status 0
00:35:31.715 [2024-12-09 10:27:02.414606] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Start core poller': duration 0.035 ms, status 0
00:35:31.715 [2024-12-09 10:27:02.414930] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:35:31.715 [2024-12-09 10:27:02.414949] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Self test on startup': duration 0.020 ms, status 0
00:35:31.715 [2024-12-09 10:27:02.441156] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Set FTL dirty state': duration 26.142 ms, status 0
00:35:31.715 [2024-12-09 10:27:02.441321] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Finalize initialization': duration 0.036 ms, status 0
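With the startup sequence complete, the step durations above are what to scan when a dirty-shutdown recovery looks slow; here 'Restore P2L checkpoints' (69.810 ms) and 'Initialize NV cache' (49.617 ms) dominate. A hypothetical Python convenience sketch for ranking them; the regex targets the condensed "Action/Rollback 'name': duration X ms" form used in this digest, not raw SPDK output:

    import re

    STEP_RE = re.compile(r"(?:Action|Rollback) '([^']+)': duration ([0-9.]+) ms")

    def slowest_steps(path, top=5):
        """Rank FTL management steps by duration, longest first."""
        steps = []
        with open(path) as log:
            for line in log:
                m = STEP_RE.search(line)
                if m:
                    steps.append((float(m.group(2)), m.group(1)))
        return sorted(steps, reverse=True)[:top]

    # For this startup phase the top entries would be
    # (69.810, 'Restore P2L checkpoints') and (49.617, 'Initialize NV cache').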
00:35:31.715 [2024-12-09 10:27:02.443183] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 331.424 ms, result 0
00:35:33.091 .. 00:36:17.703 [2024-12-09T10:27:04.824Z .. 2024-12-09T10:27:48.240Z] Copying: 23/1024 -> 1012/1024 [MB], 45 progress updates at 21-23 MBps each
00:36:17.703 [2024-12-09T10:27:48.500Z] Copying: 1024/1024 [MB] (average 22 MBps)
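The progress line's average can be sanity-checked from the surrounding timestamps: the copy runs between 'FTL startup' completing and the first shutdown trace step. A short Python check, treating that interval as the whole copy window:

    from datetime import datetime

    t0 = datetime.fromisoformat("2024-12-09 10:27:02.443183")  # 'FTL startup' finished
    t1 = datetime.fromisoformat("2024-12-09 10:27:48.271385")  # first shutdown step below
    elapsed = (t1 - t0).total_seconds()                        # ~45.8 s
    print(f"{1024 / elapsed:.1f} MB/s")                        # ~22.3, matching 'average 22 MBps'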
00:36:17.703 [2024-12-09 10:27:48.271385] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinit core IO channel': duration 0.005 ms, status 0
00:36:17.703 [2024-12-09 10:27:48.271745] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:36:17.703 [2024-12-09 10:27:48.276942] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Unregister IO device': duration 5.166 ms, status 0
00:36:17.703 [2024-12-09 10:27:48.277384] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Stop core poller': duration 0.312 ms, status 0
00:36:17.703 [2024-12-09 10:27:48.280888] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist L2P': duration 3.409 ms, status 0
00:36:17.703 [2024-12-09 10:27:48.286216] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Finish L2P trims': duration 5.218 ms, status 0
00:36:17.703 [2024-12-09 10:27:48.312531] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist NV cache metadata': duration 26.190 ms, status 0
00:36:17.703 [2024-12-09 10:27:48.327640] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist valid map metadata': duration 15.009 ms, status 0
00:36:17.703 [2024-12-09 10:27:48.329481] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist P2L metadata': duration 1.730 ms, status 0
00:36:17.703 [2024-12-09 10:27:48.354491] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist band info metadata': duration 24.882 ms, status 0
00:36:17.703 [2024-12-09 10:27:48.379043] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist trim metadata': duration 24.448 ms, status 0
00:36:17.703 [2024-12-09 10:27:48.403144] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist superblock': duration 23.971 ms, status 0
00:36:17.703 [2024-12-09 10:27:48.428033] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Set FTL clean state': duration 24.769 ms, status 0
00:36:17.703 [2024-12-09 10:27:48.428128] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:36:17.703 [2024-12-09 10:27:48.428153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
00:36:17.703 [2024-12-09 10:27:48.428172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open
00:36:17.703 [2024-12-09 10:27:48.428183 .. 429171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 3-88: 0 / 261120 wr_cnt: 0 state: free (86 identical records; the dump continues below)
00:36:17.704 [2024-12-09 10:27:48.429182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*:
[FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:36:17.704 [2024-12-09 10:27:48.429193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:36:17.704 [2024-12-09 10:27:48.429205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:36:17.704 [2024-12-09 10:27:48.429232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:36:17.704 [2024-12-09 10:27:48.429243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:36:17.704 [2024-12-09 10:27:48.429254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:36:17.704 [2024-12-09 10:27:48.429265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:36:17.704 [2024-12-09 10:27:48.429275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:36:17.704 [2024-12-09 10:27:48.429287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:36:17.704 [2024-12-09 10:27:48.429299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:36:17.704 [2024-12-09 10:27:48.429309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:36:17.704 [2024-12-09 10:27:48.429320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:36:17.705 [2024-12-09 10:27:48.429338] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:36:17.705 [2024-12-09 10:27:48.429348] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 36a3d27b-2eba-4fbc-84d4-d4a6a9f46810 00:36:17.705 [2024-12-09 10:27:48.429360] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:36:17.705 [2024-12-09 10:27:48.429371] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:36:17.705 [2024-12-09 10:27:48.429381] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:36:17.705 [2024-12-09 10:27:48.429392] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:36:17.705 [2024-12-09 10:27:48.429414] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:36:17.705 [2024-12-09 10:27:48.429425] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:36:17.705 [2024-12-09 10:27:48.429435] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:36:17.705 [2024-12-09 10:27:48.429445] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:36:17.705 [2024-12-09 10:27:48.429454] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:36:17.705 [2024-12-09 10:27:48.429464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:17.705 [2024-12-09 10:27:48.429475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:36:17.705 [2024-12-09 10:27:48.429487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.337 ms 00:36:17.705 [2024-12-09 10:27:48.429502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.705 [2024-12-09 10:27:48.444531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:17.705 [2024-12-09 10:27:48.444579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize L2P 00:36:17.705 [2024-12-09 10:27:48.444609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.007 ms 00:36:17.705 [2024-12-09 10:27:48.444619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.705 [2024-12-09 10:27:48.445173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:17.705 [2024-12-09 10:27:48.445253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:36:17.705 [2024-12-09 10:27:48.445292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.517 ms 00:36:17.705 [2024-12-09 10:27:48.445303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.705 [2024-12-09 10:27:48.482214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:17.705 [2024-12-09 10:27:48.482254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:36:17.705 [2024-12-09 10:27:48.482284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:17.705 [2024-12-09 10:27:48.482293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.705 [2024-12-09 10:27:48.482347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:17.705 [2024-12-09 10:27:48.482368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:36:17.705 [2024-12-09 10:27:48.482379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:17.705 [2024-12-09 10:27:48.482388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.705 [2024-12-09 10:27:48.482479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:17.705 [2024-12-09 10:27:48.482526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:36:17.705 [2024-12-09 10:27:48.482554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:17.705 [2024-12-09 10:27:48.482564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.705 [2024-12-09 10:27:48.482613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:17.705 [2024-12-09 10:27:48.482644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:36:17.705 [2024-12-09 10:27:48.482663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:17.705 [2024-12-09 10:27:48.482678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.964 [2024-12-09 10:27:48.570778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:17.964 [2024-12-09 10:27:48.570876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:36:17.964 [2024-12-09 10:27:48.570911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:17.964 [2024-12-09 10:27:48.570922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.964 [2024-12-09 10:27:48.642129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:17.964 [2024-12-09 10:27:48.642186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:36:17.964 [2024-12-09 10:27:48.642217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:17.964 [2024-12-09 10:27:48.642228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.964 [2024-12-09 10:27:48.642302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:17.964 [2024-12-09 
10:27:48.642319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:36:17.964 [2024-12-09 10:27:48.642330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:17.964 [2024-12-09 10:27:48.642340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.964 [2024-12-09 10:27:48.642409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:17.964 [2024-12-09 10:27:48.642424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:36:17.964 [2024-12-09 10:27:48.642436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:17.964 [2024-12-09 10:27:48.642452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.964 [2024-12-09 10:27:48.642656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:17.964 [2024-12-09 10:27:48.642681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:36:17.964 [2024-12-09 10:27:48.642693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:17.964 [2024-12-09 10:27:48.642704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.964 [2024-12-09 10:27:48.642754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:17.964 [2024-12-09 10:27:48.642770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:36:17.964 [2024-12-09 10:27:48.642782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:17.964 [2024-12-09 10:27:48.642793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.964 [2024-12-09 10:27:48.642886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:17.964 [2024-12-09 10:27:48.642909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:36:17.964 [2024-12-09 10:27:48.642921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:17.964 [2024-12-09 10:27:48.642933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.964 [2024-12-09 10:27:48.642989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:17.964 [2024-12-09 10:27:48.643006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:36:17.964 [2024-12-09 10:27:48.643019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:17.964 [2024-12-09 10:27:48.643037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.964 [2024-12-09 10:27:48.643205] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 371.791 ms, result 0 00:36:18.900 00:36:18.900 00:36:18.900 10:27:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:36:20.801 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:36:20.801 10:27:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:36:20.801 10:27:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:36:20.801 10:27:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:36:20.801 10:27:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:36:21.060 10:27:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:36:21.060 10:27:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:36:21.060 10:27:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:36:21.060 Process with pid 81851 is not found 00:36:21.060 10:27:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 81851 00:36:21.060 10:27:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81851 ']' 00:36:21.060 10:27:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 81851 00:36:21.060 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (81851) - No such process 00:36:21.060 10:27:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 81851 is not found' 00:36:21.060 10:27:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:36:21.321 Remove shared memory files 00:36:21.321 10:27:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:36:21.321 10:27:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:36:21.321 10:27:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:36:21.321 10:27:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:36:21.321 10:27:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:36:21.321 10:27:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:36:21.321 10:27:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:36:21.321 00:36:21.321 real 4m5.871s 00:36:21.321 user 4m43.787s 00:36:21.321 sys 0m37.143s 00:36:21.321 10:27:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:21.321 10:27:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:36:21.321 ************************************ 00:36:21.321 END TEST ftl_dirty_shutdown 00:36:21.321 ************************************ 00:36:21.321 10:27:52 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:36:21.321 10:27:52 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:21.321 10:27:52 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:21.321 10:27:52 ftl -- common/autotest_common.sh@10 -- # set +x 00:36:21.321 ************************************ 00:36:21.321 START TEST ftl_upgrade_shutdown 00:36:21.321 ************************************ 00:36:21.321 10:27:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:36:21.580 * Looking for test storage... 
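
The killprocess teardown traced above probes the PID with kill -0 before signalling, which is why the already-exited pid 81851 produces bash's "No such process" error and the "is not found" echo rather than an actual kill. A minimal sketch of that pattern in plain bash (a simplification for illustration, not the verbatim autotest_common.sh implementation):

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                      # no PID recorded, nothing to do
    if kill -0 "$pid" 2>/dev/null; then            # signal 0 only tests existence
        kill "$pid" && wait "$pid" 2>/dev/null     # alive: terminate, then reap
    else
        echo "Process with pid $pid is not found"  # already gone, as in the log above
    fi
}
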
00:36:21.580 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:36:21.580 10:27:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:21.580 10:27:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:36:21.580 10:27:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:21.580 10:27:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:21.580 10:27:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:21.580 10:27:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:21.580 10:27:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:21.580 10:27:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:36:21.580 10:27:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:36:21.580 10:27:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:36:21.580 10:27:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:36:21.580 10:27:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:36:21.580 10:27:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:36:21.580 10:27:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:36:21.580 10:27:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:21.580 10:27:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:36:21.580 10:27:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:36:21.580 10:27:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:21.580 10:27:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:21.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:21.581 --rc genhtml_branch_coverage=1 00:36:21.581 --rc genhtml_function_coverage=1 00:36:21.581 --rc genhtml_legend=1 00:36:21.581 --rc geninfo_all_blocks=1 00:36:21.581 --rc geninfo_unexecuted_blocks=1 00:36:21.581 00:36:21.581 ' 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:21.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:21.581 --rc genhtml_branch_coverage=1 00:36:21.581 --rc genhtml_function_coverage=1 00:36:21.581 --rc genhtml_legend=1 00:36:21.581 --rc geninfo_all_blocks=1 00:36:21.581 --rc geninfo_unexecuted_blocks=1 00:36:21.581 00:36:21.581 ' 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:21.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:21.581 --rc genhtml_branch_coverage=1 00:36:21.581 --rc genhtml_function_coverage=1 00:36:21.581 --rc genhtml_legend=1 00:36:21.581 --rc geninfo_all_blocks=1 00:36:21.581 --rc geninfo_unexecuted_blocks=1 00:36:21.581 00:36:21.581 ' 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:21.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:21.581 --rc genhtml_branch_coverage=1 00:36:21.581 --rc genhtml_function_coverage=1 00:36:21.581 --rc genhtml_legend=1 00:36:21.581 --rc geninfo_all_blocks=1 00:36:21.581 --rc geninfo_unexecuted_blocks=1 00:36:21.581 00:36:21.581 ' 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:36:21.581 10:27:52 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84364 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84364 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84364 ']' 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:21.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:21.581 10:27:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:36:21.581 [2024-12-09 10:27:52.360902] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
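
The xtrace run above (scripts/common.sh, 'lt 1.15 2' during lcov detection) walks a component-wise version comparison: both version strings are split on the characters ".-:" and compared field by field, with missing components treated as 0. A condensed sketch of that logic (simplified for illustration; the real script also normalizes components through decimal() and supports more operators):

cmp_versions() {
    local op=$2 v a b
    local -a ver1 ver2
    local IFS=.-:
    read -ra ver1 <<< "$1"                 # e.g. "1.15" -> (1 15)
    read -ra ver2 <<< "$3"                 # e.g. "2"    -> (2)
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}    # pad the shorter version with zeros
        ((a > b)) && { [[ $op == '>' ]]; return; }
        ((a < b)) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == *=* ]]                       # all components equal: true only for >=, <=, ==
}
lt() { cmp_versions "$1" '<' "$2"; }       # so 'lt 1.15 2' succeeds, as traced above
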
00:36:21.581 [2024-12-09 10:27:52.361912] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84364 ] 00:36:21.839 [2024-12-09 10:27:52.553954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:22.097 [2024-12-09 10:27:52.706463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:23.033 10:27:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:23.033 10:27:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:36:23.033 10:27:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:36:23.033 10:27:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:36:23.033 10:27:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:36:23.033 10:27:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:36:23.033 10:27:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:36:23.033 10:27:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:36:23.033 10:27:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:36:23.033 10:27:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:36:23.033 10:27:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:36:23.034 10:27:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:36:23.034 10:27:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:36:23.034 10:27:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:36:23.034 10:27:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:36:23.034 10:27:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:36:23.034 10:27:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:36:23.034 10:27:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:36:23.034 10:27:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:36:23.034 10:27:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:36:23.034 10:27:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:36:23.034 10:27:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:36:23.034 10:27:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:36:23.293 10:27:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:36:23.293 10:27:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:36:23.293 10:27:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:36:23.293 10:27:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:36:23.293 10:27:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:36:23.293 10:27:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:36:23.293 10:27:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:36:23.293 10:27:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:36:23.293 10:27:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:36:23.293 { 00:36:23.293 "name": "basen1", 00:36:23.293 "aliases": [ 00:36:23.293 "1330d0a9-cef3-4606-a86a-4fd3db065a0b" 00:36:23.293 ], 00:36:23.293 "product_name": "NVMe disk", 00:36:23.293 "block_size": 4096, 00:36:23.293 "num_blocks": 1310720, 00:36:23.293 "uuid": "1330d0a9-cef3-4606-a86a-4fd3db065a0b", 00:36:23.293 "numa_id": -1, 00:36:23.293 "assigned_rate_limits": { 00:36:23.293 "rw_ios_per_sec": 0, 00:36:23.293 "rw_mbytes_per_sec": 0, 00:36:23.293 "r_mbytes_per_sec": 0, 00:36:23.293 "w_mbytes_per_sec": 0 00:36:23.293 }, 00:36:23.293 "claimed": true, 00:36:23.293 "claim_type": "read_many_write_one", 00:36:23.293 "zoned": false, 00:36:23.293 "supported_io_types": { 00:36:23.293 "read": true, 00:36:23.293 "write": true, 00:36:23.293 "unmap": true, 00:36:23.293 "flush": true, 00:36:23.293 "reset": true, 00:36:23.293 "nvme_admin": true, 00:36:23.293 "nvme_io": true, 00:36:23.293 "nvme_io_md": false, 00:36:23.293 "write_zeroes": true, 00:36:23.293 "zcopy": false, 00:36:23.293 "get_zone_info": false, 00:36:23.293 "zone_management": false, 00:36:23.293 "zone_append": false, 00:36:23.293 "compare": true, 00:36:23.293 "compare_and_write": false, 00:36:23.293 "abort": true, 00:36:23.293 "seek_hole": false, 00:36:23.293 "seek_data": false, 00:36:23.293 "copy": true, 00:36:23.293 "nvme_iov_md": false 00:36:23.293 }, 00:36:23.293 "driver_specific": { 00:36:23.293 "nvme": [ 00:36:23.293 { 00:36:23.293 "pci_address": "0000:00:11.0", 00:36:23.293 "trid": { 00:36:23.293 "trtype": "PCIe", 00:36:23.293 "traddr": "0000:00:11.0" 00:36:23.293 }, 00:36:23.293 "ctrlr_data": { 00:36:23.293 "cntlid": 0, 00:36:23.293 "vendor_id": "0x1b36", 00:36:23.293 "model_number": "QEMU NVMe Ctrl", 00:36:23.293 "serial_number": "12341", 00:36:23.293 "firmware_revision": "8.0.0", 00:36:23.293 "subnqn": "nqn.2019-08.org.qemu:12341", 00:36:23.293 "oacs": { 00:36:23.293 "security": 0, 00:36:23.293 "format": 1, 00:36:23.293 "firmware": 0, 00:36:23.293 "ns_manage": 1 00:36:23.293 }, 00:36:23.293 "multi_ctrlr": false, 00:36:23.293 "ana_reporting": false 00:36:23.293 }, 00:36:23.293 "vs": { 00:36:23.293 "nvme_version": "1.4" 00:36:23.293 }, 00:36:23.293 "ns_data": { 00:36:23.293 "id": 1, 00:36:23.293 "can_share": false 00:36:23.293 } 00:36:23.293 } 00:36:23.293 ], 00:36:23.293 "mp_policy": "active_passive" 00:36:23.293 } 00:36:23.293 } 00:36:23.293 ]' 00:36:23.293 10:27:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:36:23.551 10:27:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:36:23.551 10:27:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:36:23.551 10:27:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:36:23.551 10:27:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:36:23.551 10:27:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:36:23.551 10:27:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:36:23.551 10:27:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:36:23.551 10:27:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:36:23.551 10:27:54 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:36:23.551 10:27:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:36:23.809 10:27:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=64ea8f8d-1847-46f8-8bf6-227b4f515734 00:36:23.809 10:27:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:36:23.809 10:27:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 64ea8f8d-1847-46f8-8bf6-227b4f515734 00:36:24.067 10:27:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:36:24.326 10:27:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=ef105c41-5827-4490-aae6-1ee31243a77b 00:36:24.326 10:27:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u ef105c41-5827-4490-aae6-1ee31243a77b 00:36:24.584 10:27:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=17ed74f7-c7af-48df-a2f8-d01d0d67f18e 00:36:24.584 10:27:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 17ed74f7-c7af-48df-a2f8-d01d0d67f18e ]] 00:36:24.584 10:27:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 17ed74f7-c7af-48df-a2f8-d01d0d67f18e 5120 00:36:24.584 10:27:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:36:24.584 10:27:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:36:24.584 10:27:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=17ed74f7-c7af-48df-a2f8-d01d0d67f18e 00:36:24.584 10:27:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:36:24.584 10:27:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 17ed74f7-c7af-48df-a2f8-d01d0d67f18e 00:36:24.584 10:27:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=17ed74f7-c7af-48df-a2f8-d01d0d67f18e 00:36:24.584 10:27:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:36:24.584 10:27:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:36:24.584 10:27:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:36:24.584 10:27:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 17ed74f7-c7af-48df-a2f8-d01d0d67f18e 00:36:24.584 10:27:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:36:24.584 { 00:36:24.584 "name": "17ed74f7-c7af-48df-a2f8-d01d0d67f18e", 00:36:24.584 "aliases": [ 00:36:24.584 "lvs/basen1p0" 00:36:24.584 ], 00:36:24.584 "product_name": "Logical Volume", 00:36:24.584 "block_size": 4096, 00:36:24.584 "num_blocks": 5242880, 00:36:24.584 "uuid": "17ed74f7-c7af-48df-a2f8-d01d0d67f18e", 00:36:24.584 "assigned_rate_limits": { 00:36:24.584 "rw_ios_per_sec": 0, 00:36:24.584 "rw_mbytes_per_sec": 0, 00:36:24.584 "r_mbytes_per_sec": 0, 00:36:24.584 "w_mbytes_per_sec": 0 00:36:24.584 }, 00:36:24.584 "claimed": false, 00:36:24.584 "zoned": false, 00:36:24.584 "supported_io_types": { 00:36:24.584 "read": true, 00:36:24.584 "write": true, 00:36:24.584 "unmap": true, 00:36:24.584 "flush": false, 00:36:24.584 "reset": true, 00:36:24.584 "nvme_admin": false, 00:36:24.584 "nvme_io": false, 00:36:24.584 "nvme_io_md": false, 00:36:24.584 "write_zeroes": 
true, 00:36:24.584 "zcopy": false, 00:36:24.584 "get_zone_info": false, 00:36:24.584 "zone_management": false, 00:36:24.584 "zone_append": false, 00:36:24.584 "compare": false, 00:36:24.584 "compare_and_write": false, 00:36:24.584 "abort": false, 00:36:24.584 "seek_hole": true, 00:36:24.584 "seek_data": true, 00:36:24.584 "copy": false, 00:36:24.584 "nvme_iov_md": false 00:36:24.584 }, 00:36:24.584 "driver_specific": { 00:36:24.584 "lvol": { 00:36:24.584 "lvol_store_uuid": "ef105c41-5827-4490-aae6-1ee31243a77b", 00:36:24.584 "base_bdev": "basen1", 00:36:24.584 "thin_provision": true, 00:36:24.584 "num_allocated_clusters": 0, 00:36:24.584 "snapshot": false, 00:36:24.584 "clone": false, 00:36:24.584 "esnap_clone": false 00:36:24.584 } 00:36:24.584 } 00:36:24.584 } 00:36:24.584 ]' 00:36:24.584 10:27:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:36:24.843 10:27:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:36:24.843 10:27:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:36:24.843 10:27:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:36:24.843 10:27:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:36:24.843 10:27:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:36:24.843 10:27:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:36:24.843 10:27:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:36:24.843 10:27:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:36:25.101 10:27:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:36:25.101 10:27:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:36:25.101 10:27:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:36:25.360 10:27:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:36:25.360 10:27:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:36:25.360 10:27:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 17ed74f7-c7af-48df-a2f8-d01d0d67f18e -c cachen1p0 --l2p_dram_limit 2 00:36:25.619 [2024-12-09 10:27:56.244220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:25.619 [2024-12-09 10:27:56.244271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:36:25.619 [2024-12-09 10:27:56.244295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:36:25.619 [2024-12-09 10:27:56.244306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:25.619 [2024-12-09 10:27:56.244377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:25.619 [2024-12-09 10:27:56.244394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:36:25.619 [2024-12-09 10:27:56.244408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:36:25.619 [2024-12-09 10:27:56.244419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:25.619 [2024-12-09 10:27:56.244447] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:36:25.619 [2024-12-09 
10:27:56.245324] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:36:25.619 [2024-12-09 10:27:56.245365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:25.619 [2024-12-09 10:27:56.245377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:36:25.619 [2024-12-09 10:27:56.245392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.918 ms 00:36:25.619 [2024-12-09 10:27:56.245403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:25.619 [2024-12-09 10:27:56.245531] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 802f82c2-d2ee-43f6-a065-5842431e7a2d 00:36:25.619 [2024-12-09 10:27:56.247455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:25.619 [2024-12-09 10:27:56.247637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:36:25.619 [2024-12-09 10:27:56.247663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:36:25.619 [2024-12-09 10:27:56.247679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:25.619 [2024-12-09 10:27:56.257359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:25.619 [2024-12-09 10:27:56.257409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:36:25.619 [2024-12-09 10:27:56.257424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.625 ms 00:36:25.619 [2024-12-09 10:27:56.257436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:25.619 [2024-12-09 10:27:56.257497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:25.619 [2024-12-09 10:27:56.257517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:36:25.619 [2024-12-09 10:27:56.257528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:36:25.619 [2024-12-09 10:27:56.257543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:25.619 [2024-12-09 10:27:56.257621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:25.619 [2024-12-09 10:27:56.257641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:36:25.619 [2024-12-09 10:27:56.257655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:36:25.619 [2024-12-09 10:27:56.257667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:25.619 [2024-12-09 10:27:56.257697] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:36:25.619 [2024-12-09 10:27:56.262632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:25.619 [2024-12-09 10:27:56.262667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:36:25.619 [2024-12-09 10:27:56.262686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.941 ms 00:36:25.619 [2024-12-09 10:27:56.262697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:25.619 [2024-12-09 10:27:56.262737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:25.619 [2024-12-09 10:27:56.262751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:36:25.619 [2024-12-09 10:27:56.262765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:36:25.619 [2024-12-09 10:27:56.262775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:36:25.619 [2024-12-09 10:27:56.262820] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:36:25.619 [2024-12-09 10:27:56.262984] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:36:25.619 [2024-12-09 10:27:56.263006] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:36:25.619 [2024-12-09 10:27:56.263034] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:36:25.619 [2024-12-09 10:27:56.263050] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:36:25.619 [2024-12-09 10:27:56.263062] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:36:25.619 [2024-12-09 10:27:56.263075] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:36:25.619 [2024-12-09 10:27:56.263085] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:36:25.619 [2024-12-09 10:27:56.263100] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:36:25.619 [2024-12-09 10:27:56.263109] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:36:25.619 [2024-12-09 10:27:56.263121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:25.619 [2024-12-09 10:27:56.263132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:36:25.619 [2024-12-09 10:27:56.263145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.305 ms 00:36:25.619 [2024-12-09 10:27:56.263154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:25.619 [2024-12-09 10:27:56.263236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:25.619 [2024-12-09 10:27:56.263259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:36:25.619 [2024-12-09 10:27:56.263273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms 00:36:25.620 [2024-12-09 10:27:56.263282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:25.620 [2024-12-09 10:27:56.263385] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:36:25.620 [2024-12-09 10:27:56.263401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:36:25.620 [2024-12-09 10:27:56.263414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:36:25.620 [2024-12-09 10:27:56.263425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:25.620 [2024-12-09 10:27:56.263437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:36:25.620 [2024-12-09 10:27:56.263446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:36:25.620 [2024-12-09 10:27:56.263457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:36:25.620 [2024-12-09 10:27:56.263466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:36:25.620 [2024-12-09 10:27:56.263477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:36:25.620 [2024-12-09 10:27:56.263486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:25.620 [2024-12-09 10:27:56.263499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:36:25.620 [2024-12-09 10:27:56.263508] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:36:25.620 [2024-12-09 10:27:56.263520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:25.620 [2024-12-09 10:27:56.263529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:36:25.620 [2024-12-09 10:27:56.263540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:36:25.620 [2024-12-09 10:27:56.263548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:25.620 [2024-12-09 10:27:56.263564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:36:25.620 [2024-12-09 10:27:56.263574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:36:25.620 [2024-12-09 10:27:56.263585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:25.620 [2024-12-09 10:27:56.263594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:36:25.620 [2024-12-09 10:27:56.263605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:36:25.620 [2024-12-09 10:27:56.263614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:25.620 [2024-12-09 10:27:56.263625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:36:25.620 [2024-12-09 10:27:56.263634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:36:25.620 [2024-12-09 10:27:56.263645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:25.620 [2024-12-09 10:27:56.263655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:36:25.620 [2024-12-09 10:27:56.263665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:36:25.620 [2024-12-09 10:27:56.263674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:25.620 [2024-12-09 10:27:56.263685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:36:25.620 [2024-12-09 10:27:56.263694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:36:25.620 [2024-12-09 10:27:56.263706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:25.620 [2024-12-09 10:27:56.263715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:36:25.620 [2024-12-09 10:27:56.263729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:36:25.620 [2024-12-09 10:27:56.263738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:25.620 [2024-12-09 10:27:56.263750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:36:25.620 [2024-12-09 10:27:56.263759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:36:25.620 [2024-12-09 10:27:56.263771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:25.620 [2024-12-09 10:27:56.263780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:36:25.620 [2024-12-09 10:27:56.263792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:36:25.620 [2024-12-09 10:27:56.263800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:25.620 [2024-12-09 10:27:56.263812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:36:25.620 [2024-12-09 10:27:56.263821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:36:25.620 [2024-12-09 10:27:56.263832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:25.620 [2024-12-09 10:27:56.263852] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:36:25.620 [2024-12-09 10:27:56.263867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:36:25.620 [2024-12-09 10:27:56.263878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:36:25.620 [2024-12-09 10:27:56.263890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:25.620 [2024-12-09 10:27:56.263901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:36:25.620 [2024-12-09 10:27:56.263916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:36:25.620 [2024-12-09 10:27:56.263926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:36:25.620 [2024-12-09 10:27:56.263939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:36:25.620 [2024-12-09 10:27:56.263947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:36:25.620 [2024-12-09 10:27:56.263959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:36:25.620 [2024-12-09 10:27:56.263971] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:36:25.620 [2024-12-09 10:27:56.263990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:25.620 [2024-12-09 10:27:56.264002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:36:25.620 [2024-12-09 10:27:56.264015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:36:25.620 [2024-12-09 10:27:56.264025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:36:25.620 [2024-12-09 10:27:56.264038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:36:25.620 [2024-12-09 10:27:56.264048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:36:25.621 [2024-12-09 10:27:56.264061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:36:25.621 [2024-12-09 10:27:56.264071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:36:25.621 [2024-12-09 10:27:56.264085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:36:25.621 [2024-12-09 10:27:56.264095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:36:25.621 [2024-12-09 10:27:56.264109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:36:25.621 [2024-12-09 10:27:56.264120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:36:25.621 [2024-12-09 10:27:56.264132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:36:25.621 [2024-12-09 10:27:56.264142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:36:25.621 [2024-12-09 10:27:56.264154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:36:25.621 [2024-12-09 10:27:56.264164] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:36:25.621 [2024-12-09 10:27:56.264178] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:25.621 [2024-12-09 10:27:56.264190] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:36:25.621 [2024-12-09 10:27:56.264202] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:36:25.621 [2024-12-09 10:27:56.264212] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:36:25.621 [2024-12-09 10:27:56.264225] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:36:25.621 [2024-12-09 10:27:56.264236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:25.621 [2024-12-09 10:27:56.264248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:36:25.621 [2024-12-09 10:27:56.264259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.909 ms 00:36:25.621 [2024-12-09 10:27:56.264271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:25.621 [2024-12-09 10:27:56.264320] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
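
A back-of-envelope check ties the layout dump above together (the 20% figure is FTL's documented default over-provisioning ratio, assumed here since this log does not print it): the 18432 MiB data_btm region holds 4718592 blocks of 4 KiB; reserving 20% of them leaves the 3774873 user LBAs reported as "L2P entries", and at "L2P address size: 4" the mapping table needs about 14.4 MiB, which fits the 14.50 MiB l2p region:

blocks=$(( 18432 * 1024 * 1024 / 4096 ))   # data_btm: 18432 MiB in 4 KiB blocks = 4718592
l2p_entries=$(( blocks * 80 / 100 ))       # minus assumed 20% OP = 3774873, matching the dump
l2p_bytes=$(( l2p_entries * 4 ))           # 4-byte entries ~= 14.4 MiB of mapping table
echo "$blocks $l2p_entries $l2p_bytes"
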
00:36:25.621 [2024-12-09 10:27:56.264340] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:36:28.908 [2024-12-09 10:27:59.221133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:28.908 [2024-12-09 10:27:59.221220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:36:28.908 [2024-12-09 10:27:59.221242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2956.824 ms 00:36:28.908 [2024-12-09 10:27:59.221256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:28.908 [2024-12-09 10:27:59.254183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:28.908 [2024-12-09 10:27:59.254513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:36:28.908 [2024-12-09 10:27:59.254542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.695 ms 00:36:28.908 [2024-12-09 10:27:59.254558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:28.908 [2024-12-09 10:27:59.254700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:28.908 [2024-12-09 10:27:59.254725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:36:28.908 [2024-12-09 10:27:59.254750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:36:28.908 [2024-12-09 10:27:59.254772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:28.908 [2024-12-09 10:27:59.295094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:28.908 [2024-12-09 10:27:59.295320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:36:28.908 [2024-12-09 10:27:59.295348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.179 ms 00:36:28.908 [2024-12-09 10:27:59.295364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:28.908 [2024-12-09 10:27:59.295409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:28.908 [2024-12-09 10:27:59.295431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:36:28.908 [2024-12-09 10:27:59.295444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:36:28.908 [2024-12-09 10:27:59.295457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:28.908 [2024-12-09 10:27:59.296129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:28.908 [2024-12-09 10:27:59.296151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:36:28.908 [2024-12-09 10:27:59.296174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.605 ms 00:36:28.908 [2024-12-09 10:27:59.296188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:28.908 [2024-12-09 10:27:59.296272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:28.908 [2024-12-09 10:27:59.296290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:36:28.908 [2024-12-09 10:27:59.296304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:36:28.908 [2024-12-09 10:27:59.296335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:28.908 [2024-12-09 10:27:59.316065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:28.908 [2024-12-09 10:27:59.316108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:36:28.908 [2024-12-09 10:27:59.316123] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.706 ms 00:36:28.908 [2024-12-09 10:27:59.316136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:28.908 [2024-12-09 10:27:59.339201] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:36:28.908 [2024-12-09 10:27:59.340572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:28.908 [2024-12-09 10:27:59.340602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:36:28.908 [2024-12-09 10:27:59.340620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.347 ms 00:36:28.908 [2024-12-09 10:27:59.340630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:28.908 [2024-12-09 10:27:59.366376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:28.908 [2024-12-09 10:27:59.366415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:36:28.908 [2024-12-09 10:27:59.366435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.708 ms 00:36:28.908 [2024-12-09 10:27:59.366445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:28.908 [2024-12-09 10:27:59.366540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:28.908 [2024-12-09 10:27:59.366560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:36:28.908 [2024-12-09 10:27:59.366577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:36:28.908 [2024-12-09 10:27:59.366596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:28.908 [2024-12-09 10:27:59.390963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:28.908 [2024-12-09 10:27:59.390999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:36:28.908 [2024-12-09 10:27:59.391019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.288 ms 00:36:28.908 [2024-12-09 10:27:59.391030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:28.909 [2024-12-09 10:27:59.415086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:28.909 [2024-12-09 10:27:59.415121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:36:28.909 [2024-12-09 10:27:59.415139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.008 ms 00:36:28.909 [2024-12-09 10:27:59.415149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:28.909 [2024-12-09 10:27:59.415747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:28.909 [2024-12-09 10:27:59.415769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:36:28.909 [2024-12-09 10:27:59.415815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.557 ms 00:36:28.909 [2024-12-09 10:27:59.415843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:28.909 [2024-12-09 10:27:59.489791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:28.909 [2024-12-09 10:27:59.489846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:36:28.909 [2024-12-09 10:27:59.489885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 73.870 ms 00:36:28.909 [2024-12-09 10:27:59.489896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:28.909 [2024-12-09 10:27:59.516302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:36:28.909 [2024-12-09 10:27:59.516339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:36:28.909 [2024-12-09 10:27:59.516358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.291 ms 00:36:28.909 [2024-12-09 10:27:59.516368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:28.909 [2024-12-09 10:27:59.540854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:28.909 [2024-12-09 10:27:59.540888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:36:28.909 [2024-12-09 10:27:59.540905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.439 ms 00:36:28.909 [2024-12-09 10:27:59.540915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:28.909 [2024-12-09 10:27:59.565474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:28.909 [2024-12-09 10:27:59.565510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:36:28.909 [2024-12-09 10:27:59.565528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.511 ms 00:36:28.909 [2024-12-09 10:27:59.565538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:28.909 [2024-12-09 10:27:59.565587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:28.909 [2024-12-09 10:27:59.565602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:36:28.909 [2024-12-09 10:27:59.565618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:36:28.909 [2024-12-09 10:27:59.565628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:28.909 [2024-12-09 10:27:59.565721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:28.909 [2024-12-09 10:27:59.565739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:36:28.909 [2024-12-09 10:27:59.565752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:36:28.909 [2024-12-09 10:27:59.565762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:28.909 [2024-12-09 10:27:59.567257] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3322.398 ms, result 0 00:36:28.909 { 00:36:28.909 "name": "ftl", 00:36:28.909 "uuid": "802f82c2-d2ee-43f6-a065-5842431e7a2d" 00:36:28.909 } 00:36:28.909 10:27:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:36:29.171 [2024-12-09 10:27:59.886164] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:29.171 10:27:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:36:29.430 10:28:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:36:29.998 [2024-12-09 10:28:00.486956] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:36:29.998 10:28:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:36:29.998 [2024-12-09 10:28:00.753564] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:29.998 10:28:00 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:36:30.566 Fill FTL, iteration 1 00:36:30.566 10:28:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:36:30.566 10:28:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:36:30.566 10:28:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:36:30.566 10:28:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:36:30.566 10:28:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:36:30.566 10:28:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:36:30.566 10:28:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:36:30.566 10:28:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:36:30.566 10:28:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:36:30.566 10:28:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:36:30.566 10:28:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:36:30.566 10:28:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:36:30.566 10:28:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:36:30.566 10:28:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:36:30.566 10:28:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:36:30.566 10:28:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:36:30.566 10:28:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=84494 00:36:30.566 10:28:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:36:30.566 10:28:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:36:30.566 10:28:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 84494 /var/tmp/spdk.tgt.sock 00:36:30.566 10:28:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84494 ']' 00:36:30.566 10:28:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:36:30.566 10:28:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:30.566 10:28:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:36:30.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:36:30.566 10:28:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:30.566 10:28:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:36:30.566 [2024-12-09 10:28:01.293197] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
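The loop parameters read back from upgrade_shutdown.sh above define each fill pass: 1024 writes of bs=1 MiB at queue depth 2, which is exactly the 1 GiB given as size, with seek/skip advanced by 1024 blocks between the two iterations so the passes cover adjacent ranges. The arithmetic:

    # fill geometry from the parameters above: bs * count equals size
    bs=1048576; count=1024
    echo $(( bs * count ))   # 1073741824, i.e. 1 GiB per iteration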
00:36:30.566 [2024-12-09 10:28:01.293597] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84494 ] 00:36:30.825 [2024-12-09 10:28:01.461289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:30.825 [2024-12-09 10:28:01.571923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:31.761 10:28:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:31.761 10:28:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:36:31.761 10:28:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:36:32.020 ftln1 00:36:32.020 10:28:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:36:32.020 10:28:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:36:32.279 10:28:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:36:32.279 10:28:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 84494 00:36:32.279 10:28:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84494 ']' 00:36:32.279 10:28:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84494 00:36:32.279 10:28:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:36:32.279 10:28:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:32.279 10:28:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84494 00:36:32.279 killing process with pid 84494 00:36:32.279 10:28:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:32.279 10:28:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:32.279 10:28:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84494' 00:36:32.279 10:28:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84494 00:36:32.279 10:28:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84494 00:36:34.813 10:28:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:36:34.813 10:28:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:36:34.813 [2024-12-09 10:28:05.110813] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
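What tcp_initiator_setup did above: a throwaway spdk_tgt on its own RPC socket attached to the exported subsystem over TCP, creating bdev ftln1; its bdev subsystem configuration was wrapped in a subsystems envelope, and the helper target was killed. spdk_dd, launched just above, replays that JSON via --json to recreate ftln1 in-process. A minimal sketch of the capture step (common.sh@171-173); the redirect into ini.json is an assumption, the path being the one spdk_dd consumes:

    # capture the initiator's bdev config for spdk_dd to replay
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock"
    { echo '{"subsystems": ['; $rpc save_subsystem_config -n bdev; echo ']}'; } \
        > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json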
00:36:34.813 [2024-12-09 10:28:05.111034] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84537 ] 00:36:34.813 [2024-12-09 10:28:05.286681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:34.813 [2024-12-09 10:28:05.402992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:36.191  [2024-12-09T10:28:07.925Z] Copying: 209/1024 [MB] (209 MBps) [2024-12-09T10:28:08.861Z] Copying: 426/1024 [MB] (217 MBps) [2024-12-09T10:28:10.237Z] Copying: 636/1024 [MB] (210 MBps) [2024-12-09T10:28:10.804Z] Copying: 844/1024 [MB] (208 MBps) [2024-12-09T10:28:11.740Z] Copying: 1024/1024 [MB] (average 210 MBps) 00:36:40.943 00:36:41.232 Calculate MD5 checksum, iteration 1 00:36:41.232 10:28:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:36:41.232 10:28:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:36:41.232 10:28:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:36:41.232 10:28:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:36:41.232 10:28:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:36:41.232 10:28:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:36:41.232 10:28:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:36:41.232 10:28:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:36:41.232 [2024-12-09 10:28:11.870109] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
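The Copying lines above are spdk_dd progress output for the first fill: 1024 MiB at an average of about 210 MBps, roughly five seconds of I/O, consistent with the 10:28:05 to 10:28:11 timestamps around it. The read-back just launched reverses direction, with --ib=ftln1 pulling the same range out of the FTL bdev into test/ftl/file; --skip plays the input-side role that --seek played for the writes. A rough cross-check of the implied duration:

    # 1 GiB at the ~210 MBps average reported above
    echo "scale=1; 1024 / 210" | bc   # 4.8 (seconds)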
00:36:41.232 [2024-12-09 10:28:11.870308] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84607 ] 00:36:41.542 [2024-12-09 10:28:12.052094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:41.542 [2024-12-09 10:28:12.172721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:42.918  [2024-12-09T10:28:14.651Z] Copying: 430/1024 [MB] (430 MBps) [2024-12-09T10:28:14.910Z] Copying: 891/1024 [MB] (461 MBps) [2024-12-09T10:28:15.845Z] Copying: 1024/1024 [MB] (average 447 MBps) 00:36:45.048 00:36:45.048 10:28:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:36:45.048 10:28:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:36:46.951 10:28:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:36:46.951 Fill FTL, iteration 2 00:36:46.951 10:28:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=f15f8d3224ba46d7e1965ebc9801bbaf 00:36:46.951 10:28:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:36:46.951 10:28:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:36:46.951 10:28:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:36:46.951 10:28:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:36:46.951 10:28:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:36:46.951 10:28:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:36:46.951 10:28:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:36:46.951 10:28:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:36:46.951 10:28:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:36:47.209 [2024-12-09 10:28:17.840782] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
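Steps @47 and @48 above hash the read-back file and store the digest (f15f8d32... for iteration 1); upgrade_shutdown.sh keeps one sum per iteration so the same ranges can be re-read and compared after the prep_upgrade_on_shutdown restart. The comparison itself happens later in the script and is assumed here; a minimal sketch of the bookkeeping:

    # per-iteration digest bookkeeping, as in upgrade_shutdown.sh@47-48 above
    sums[i]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -d' ' -f1)
    i=$(( i + 1 ))   # next pass moves to the following 1 GiB (seek/skip += 1024)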
00:36:47.209 [2024-12-09 10:28:17.840984] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84673 ] 00:36:47.468 [2024-12-09 10:28:18.033629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:47.468 [2024-12-09 10:28:18.181884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:48.844  [2024-12-09T10:28:21.018Z] Copying: 213/1024 [MB] (213 MBps) [2024-12-09T10:28:21.955Z] Copying: 422/1024 [MB] (209 MBps) [2024-12-09T10:28:22.891Z] Copying: 630/1024 [MB] (208 MBps) [2024-12-09T10:28:23.826Z] Copying: 837/1024 [MB] (207 MBps) [2024-12-09T10:28:24.763Z] Copying: 1024/1024 [MB] (average 208 MBps) 00:36:53.966 00:36:53.966 Calculate MD5 checksum, iteration 2 00:36:53.966 10:28:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:36:53.966 10:28:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:36:53.966 10:28:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:36:53.966 10:28:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:36:53.966 10:28:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:36:53.966 10:28:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:36:53.966 10:28:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:36:53.966 10:28:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:36:53.966 [2024-12-09 10:28:24.723660] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
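After the second read-back below finishes, the test checks how much of the NV cache actually holds data. The property dump further down lists the cache chunks, and the check at @63-@64 counts the non-empty ones with the jq filter shown there; with 2 GiB written it finds two CLOSED chunks at utilization 1.0 plus one partially filled OPEN chunk:

    # count cache chunks holding data (upgrade_shutdown.sh@63 below)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "cache_device")
               | .chunks[] | select(.utilization != 0.0)] | length'   # 3 here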
00:36:53.966 [2024-12-09 10:28:24.723865] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84738 ] 00:36:54.224 [2024-12-09 10:28:24.906444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:54.482 [2024-12-09 10:28:25.030890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:56.386  [2024-12-09T10:28:27.750Z] Copying: 438/1024 [MB] (438 MBps) [2024-12-09T10:28:28.008Z] Copying: 873/1024 [MB] (435 MBps) [2024-12-09T10:28:29.383Z] Copying: 1024/1024 [MB] (average 438 MBps) 00:36:58.586 00:36:58.845 10:28:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:36:58.845 10:28:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:37:00.749 10:28:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:37:00.749 10:28:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=a24d83cbecb8043da2f316bbb1765311 00:37:00.749 10:28:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:37:00.749 10:28:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:37:00.749 10:28:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:37:01.007 [2024-12-09 10:28:31.670800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:01.007 [2024-12-09 10:28:31.670892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:37:01.007 [2024-12-09 10:28:31.670947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:37:01.007 [2024-12-09 10:28:31.670978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:01.007 [2024-12-09 10:28:31.671017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:01.007 [2024-12-09 10:28:31.671059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:37:01.007 [2024-12-09 10:28:31.671073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:37:01.007 [2024-12-09 10:28:31.671085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:01.007 [2024-12-09 10:28:31.671114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:01.007 [2024-12-09 10:28:31.671128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:37:01.007 [2024-12-09 10:28:31.671140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:37:01.007 [2024-12-09 10:28:31.671152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:01.007 [2024-12-09 10:28:31.671281] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.442 ms, result 0 00:37:01.007 true 00:37:01.007 10:28:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:37:01.266 { 00:37:01.266 "name": "ftl", 00:37:01.266 "properties": [ 00:37:01.266 { 00:37:01.266 "name": "superblock_version", 00:37:01.266 "value": 5, 00:37:01.266 "read-only": true 00:37:01.266 }, 00:37:01.266 { 00:37:01.266 "name": "base_device", 00:37:01.266 "bands": [ 00:37:01.266 { 00:37:01.266 "id": 
0, 00:37:01.266 "state": "FREE", 00:37:01.266 "validity": 0.0 00:37:01.266 }, 00:37:01.266 { 00:37:01.266 "id": 1, 00:37:01.266 "state": "FREE", 00:37:01.266 "validity": 0.0 00:37:01.266 }, 00:37:01.266 { 00:37:01.266 "id": 2, 00:37:01.266 "state": "FREE", 00:37:01.266 "validity": 0.0 00:37:01.266 }, 00:37:01.266 { 00:37:01.266 "id": 3, 00:37:01.266 "state": "FREE", 00:37:01.266 "validity": 0.0 00:37:01.266 }, 00:37:01.266 { 00:37:01.266 "id": 4, 00:37:01.266 "state": "FREE", 00:37:01.266 "validity": 0.0 00:37:01.266 }, 00:37:01.266 { 00:37:01.266 "id": 5, 00:37:01.266 "state": "FREE", 00:37:01.266 "validity": 0.0 00:37:01.266 }, 00:37:01.266 { 00:37:01.266 "id": 6, 00:37:01.266 "state": "FREE", 00:37:01.266 "validity": 0.0 00:37:01.266 }, 00:37:01.266 { 00:37:01.266 "id": 7, 00:37:01.266 "state": "FREE", 00:37:01.266 "validity": 0.0 00:37:01.266 }, 00:37:01.266 { 00:37:01.266 "id": 8, 00:37:01.266 "state": "FREE", 00:37:01.266 "validity": 0.0 00:37:01.266 }, 00:37:01.266 { 00:37:01.266 "id": 9, 00:37:01.266 "state": "FREE", 00:37:01.266 "validity": 0.0 00:37:01.266 }, 00:37:01.266 { 00:37:01.266 "id": 10, 00:37:01.266 "state": "FREE", 00:37:01.266 "validity": 0.0 00:37:01.266 }, 00:37:01.266 { 00:37:01.266 "id": 11, 00:37:01.266 "state": "FREE", 00:37:01.266 "validity": 0.0 00:37:01.266 }, 00:37:01.266 { 00:37:01.266 "id": 12, 00:37:01.266 "state": "FREE", 00:37:01.266 "validity": 0.0 00:37:01.266 }, 00:37:01.266 { 00:37:01.266 "id": 13, 00:37:01.266 "state": "FREE", 00:37:01.266 "validity": 0.0 00:37:01.266 }, 00:37:01.266 { 00:37:01.266 "id": 14, 00:37:01.266 "state": "FREE", 00:37:01.266 "validity": 0.0 00:37:01.266 }, 00:37:01.266 { 00:37:01.266 "id": 15, 00:37:01.266 "state": "FREE", 00:37:01.266 "validity": 0.0 00:37:01.266 }, 00:37:01.266 { 00:37:01.266 "id": 16, 00:37:01.266 "state": "FREE", 00:37:01.266 "validity": 0.0 00:37:01.266 }, 00:37:01.266 { 00:37:01.266 "id": 17, 00:37:01.266 "state": "FREE", 00:37:01.266 "validity": 0.0 00:37:01.266 } 00:37:01.266 ], 00:37:01.266 "read-only": true 00:37:01.266 }, 00:37:01.266 { 00:37:01.266 "name": "cache_device", 00:37:01.266 "type": "bdev", 00:37:01.266 "chunks": [ 00:37:01.266 { 00:37:01.266 "id": 0, 00:37:01.266 "state": "INACTIVE", 00:37:01.267 "utilization": 0.0 00:37:01.267 }, 00:37:01.267 { 00:37:01.267 "id": 1, 00:37:01.267 "state": "CLOSED", 00:37:01.267 "utilization": 1.0 00:37:01.267 }, 00:37:01.267 { 00:37:01.267 "id": 2, 00:37:01.267 "state": "CLOSED", 00:37:01.267 "utilization": 1.0 00:37:01.267 }, 00:37:01.267 { 00:37:01.267 "id": 3, 00:37:01.267 "state": "OPEN", 00:37:01.267 "utilization": 0.001953125 00:37:01.267 }, 00:37:01.267 { 00:37:01.267 "id": 4, 00:37:01.267 "state": "OPEN", 00:37:01.267 "utilization": 0.0 00:37:01.267 } 00:37:01.267 ], 00:37:01.267 "read-only": true 00:37:01.267 }, 00:37:01.267 { 00:37:01.267 "name": "verbose_mode", 00:37:01.267 "value": true, 00:37:01.267 "unit": "", 00:37:01.267 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:37:01.267 }, 00:37:01.267 { 00:37:01.267 "name": "prep_upgrade_on_shutdown", 00:37:01.267 "value": false, 00:37:01.267 "unit": "", 00:37:01.267 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:37:01.267 } 00:37:01.267 ] 00:37:01.267 } 00:37:01.267 10:28:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:37:01.525 [2024-12-09 10:28:32.203440] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:01.525 [2024-12-09 10:28:32.203697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:37:01.525 [2024-12-09 10:28:32.203728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:37:01.525 [2024-12-09 10:28:32.203741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:01.525 [2024-12-09 10:28:32.203786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:01.525 [2024-12-09 10:28:32.203802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:37:01.525 [2024-12-09 10:28:32.203813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:37:01.525 [2024-12-09 10:28:32.203824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:01.525 [2024-12-09 10:28:32.203886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:01.525 [2024-12-09 10:28:32.203903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:37:01.525 [2024-12-09 10:28:32.203915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:37:01.525 [2024-12-09 10:28:32.203926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:01.525 [2024-12-09 10:28:32.204026] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.545 ms, result 0 00:37:01.525 true 00:37:01.525 10:28:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:37:01.525 10:28:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:37:01.525 10:28:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:37:01.784 10:28:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:37:01.784 10:28:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:37:01.784 10:28:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:37:02.042 [2024-12-09 10:28:32.740006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:02.042 [2024-12-09 10:28:32.740056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:37:02.042 [2024-12-09 10:28:32.740090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:37:02.042 [2024-12-09 10:28:32.740102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:02.042 [2024-12-09 10:28:32.740132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:02.042 [2024-12-09 10:28:32.740146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:37:02.042 [2024-12-09 10:28:32.740157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:37:02.042 [2024-12-09 10:28:32.740166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:02.042 [2024-12-09 10:28:32.740190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:02.042 [2024-12-09 10:28:32.740203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:37:02.042 [2024-12-09 10:28:32.740214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:37:02.042 [2024-12-09 
10:28:32.740224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:02.042 [2024-12-09 10:28:32.740293] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.268 ms, result 0 00:37:02.042 true 00:37:02.042 10:28:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:37:02.301 { 00:37:02.301 "name": "ftl", 00:37:02.301 "properties": [ 00:37:02.301 { 00:37:02.301 "name": "superblock_version", 00:37:02.301 "value": 5, 00:37:02.301 "read-only": true 00:37:02.301 }, 00:37:02.301 { 00:37:02.301 "name": "base_device", 00:37:02.301 "bands": [ 00:37:02.301 { 00:37:02.301 "id": 0, 00:37:02.301 "state": "FREE", 00:37:02.301 "validity": 0.0 00:37:02.301 }, 00:37:02.301 { 00:37:02.301 "id": 1, 00:37:02.301 "state": "FREE", 00:37:02.301 "validity": 0.0 00:37:02.301 }, 00:37:02.301 { 00:37:02.301 "id": 2, 00:37:02.301 "state": "FREE", 00:37:02.301 "validity": 0.0 00:37:02.301 }, 00:37:02.301 { 00:37:02.301 "id": 3, 00:37:02.301 "state": "FREE", 00:37:02.301 "validity": 0.0 00:37:02.301 }, 00:37:02.301 { 00:37:02.301 "id": 4, 00:37:02.301 "state": "FREE", 00:37:02.301 "validity": 0.0 00:37:02.301 }, 00:37:02.301 { 00:37:02.301 "id": 5, 00:37:02.301 "state": "FREE", 00:37:02.301 "validity": 0.0 00:37:02.301 }, 00:37:02.301 { 00:37:02.301 "id": 6, 00:37:02.301 "state": "FREE", 00:37:02.301 "validity": 0.0 00:37:02.301 }, 00:37:02.301 { 00:37:02.301 "id": 7, 00:37:02.301 "state": "FREE", 00:37:02.301 "validity": 0.0 00:37:02.301 }, 00:37:02.301 { 00:37:02.301 "id": 8, 00:37:02.301 "state": "FREE", 00:37:02.301 "validity": 0.0 00:37:02.302 }, 00:37:02.302 { 00:37:02.302 "id": 9, 00:37:02.302 "state": "FREE", 00:37:02.302 "validity": 0.0 00:37:02.302 }, 00:37:02.302 { 00:37:02.302 "id": 10, 00:37:02.302 "state": "FREE", 00:37:02.302 "validity": 0.0 00:37:02.302 }, 00:37:02.302 { 00:37:02.302 "id": 11, 00:37:02.302 "state": "FREE", 00:37:02.302 "validity": 0.0 00:37:02.302 }, 00:37:02.302 { 00:37:02.302 "id": 12, 00:37:02.302 "state": "FREE", 00:37:02.302 "validity": 0.0 00:37:02.302 }, 00:37:02.302 { 00:37:02.302 "id": 13, 00:37:02.302 "state": "FREE", 00:37:02.302 "validity": 0.0 00:37:02.302 }, 00:37:02.302 { 00:37:02.302 "id": 14, 00:37:02.302 "state": "FREE", 00:37:02.302 "validity": 0.0 00:37:02.302 }, 00:37:02.302 { 00:37:02.302 "id": 15, 00:37:02.302 "state": "FREE", 00:37:02.302 "validity": 0.0 00:37:02.302 }, 00:37:02.302 { 00:37:02.302 "id": 16, 00:37:02.302 "state": "FREE", 00:37:02.302 "validity": 0.0 00:37:02.302 }, 00:37:02.302 { 00:37:02.302 "id": 17, 00:37:02.302 "state": "FREE", 00:37:02.302 "validity": 0.0 00:37:02.302 } 00:37:02.302 ], 00:37:02.302 "read-only": true 00:37:02.302 }, 00:37:02.302 { 00:37:02.302 "name": "cache_device", 00:37:02.302 "type": "bdev", 00:37:02.302 "chunks": [ 00:37:02.302 { 00:37:02.302 "id": 0, 00:37:02.302 "state": "INACTIVE", 00:37:02.302 "utilization": 0.0 00:37:02.302 }, 00:37:02.302 { 00:37:02.302 "id": 1, 00:37:02.302 "state": "CLOSED", 00:37:02.302 "utilization": 1.0 00:37:02.302 }, 00:37:02.302 { 00:37:02.302 "id": 2, 00:37:02.302 "state": "CLOSED", 00:37:02.302 "utilization": 1.0 00:37:02.302 }, 00:37:02.302 { 00:37:02.302 "id": 3, 00:37:02.302 "state": "OPEN", 00:37:02.302 "utilization": 0.001953125 00:37:02.302 }, 00:37:02.302 { 00:37:02.302 "id": 4, 00:37:02.302 "state": "OPEN", 00:37:02.302 "utilization": 0.0 00:37:02.302 } 00:37:02.302 ], 00:37:02.302 "read-only": true 00:37:02.302 
}, 00:37:02.302 { 00:37:02.302 "name": "verbose_mode", 00:37:02.302 "value": true, 00:37:02.302 "unit": "", 00:37:02.302 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:37:02.302 }, 00:37:02.302 { 00:37:02.302 "name": "prep_upgrade_on_shutdown", 00:37:02.302 "value": true, 00:37:02.302 "unit": "", 00:37:02.302 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:37:02.302 } 00:37:02.302 ] 00:37:02.302 } 00:37:02.302 10:28:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:37:02.302 10:28:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84364 ]] 00:37:02.302 10:28:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84364 00:37:02.302 10:28:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84364 ']' 00:37:02.302 10:28:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84364 00:37:02.302 10:28:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:37:02.302 10:28:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:02.302 10:28:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84364 00:37:02.302 killing process with pid 84364 00:37:02.302 10:28:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:02.302 10:28:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:02.302 10:28:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84364' 00:37:02.302 10:28:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84364 00:37:02.302 10:28:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84364 00:37:03.238 [2024-12-09 10:28:33.940363] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:37:03.238 [2024-12-09 10:28:33.958404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:03.238 [2024-12-09 10:28:33.958446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:37:03.238 [2024-12-09 10:28:33.958481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:37:03.238 [2024-12-09 10:28:33.958492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:03.238 [2024-12-09 10:28:33.958520] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:37:03.238 [2024-12-09 10:28:33.961949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:03.238 [2024-12-09 10:28:33.961982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:37:03.238 [2024-12-09 10:28:33.962012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.410 ms 00:37:03.238 [2024-12-09 10:28:33.962028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:13.214 [2024-12-09 10:28:42.514492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:13.214 [2024-12-09 10:28:42.514577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:37:13.214 [2024-12-09 10:28:42.514657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8552.470 ms 00:37:13.214 [2024-12-09 10:28:42.514675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:13.214 [2024-12-09 
10:28:42.515975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:13.214 [2024-12-09 10:28:42.516006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:37:13.214 [2024-12-09 10:28:42.516020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.258 ms 00:37:13.214 [2024-12-09 10:28:42.516032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:13.214 [2024-12-09 10:28:42.517211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:13.214 [2024-12-09 10:28:42.517241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:37:13.214 [2024-12-09 10:28:42.517271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.142 ms 00:37:13.214 [2024-12-09 10:28:42.517289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:13.214 [2024-12-09 10:28:42.530269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:13.214 [2024-12-09 10:28:42.530308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:37:13.214 [2024-12-09 10:28:42.530340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.895 ms 00:37:13.214 [2024-12-09 10:28:42.530350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:13.214 [2024-12-09 10:28:42.537484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:13.214 [2024-12-09 10:28:42.537670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:37:13.214 [2024-12-09 10:28:42.537712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.077 ms 00:37:13.214 [2024-12-09 10:28:42.537724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:13.214 [2024-12-09 10:28:42.537860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:13.214 [2024-12-09 10:28:42.537898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:37:13.214 [2024-12-09 10:28:42.537919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms 00:37:13.214 [2024-12-09 10:28:42.537931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:13.214 [2024-12-09 10:28:42.548810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:13.214 [2024-12-09 10:28:42.548872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:37:13.214 [2024-12-09 10:28:42.548905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.856 ms 00:37:13.214 [2024-12-09 10:28:42.548914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:13.214 [2024-12-09 10:28:42.559870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:13.214 [2024-12-09 10:28:42.560089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:37:13.214 [2024-12-09 10:28:42.560115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.917 ms 00:37:13.214 [2024-12-09 10:28:42.560127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:13.214 [2024-12-09 10:28:42.572961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:13.214 [2024-12-09 10:28:42.573010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:37:13.214 [2024-12-09 10:28:42.573042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.775 ms 00:37:13.214 [2024-12-09 10:28:42.573052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:37:13.214 [2024-12-09 10:28:42.583950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:13.214 [2024-12-09 10:28:42.584015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:37:13.214 [2024-12-09 10:28:42.584053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.825 ms 00:37:13.214 [2024-12-09 10:28:42.584068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:13.214 [2024-12-09 10:28:42.584116] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:37:13.214 [2024-12-09 10:28:42.584159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:37:13.214 [2024-12-09 10:28:42.584174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:37:13.214 [2024-12-09 10:28:42.584186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:37:13.214 [2024-12-09 10:28:42.584197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:37:13.214 [2024-12-09 10:28:42.584211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:37:13.214 [2024-12-09 10:28:42.584246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:37:13.214 [2024-12-09 10:28:42.584259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:37:13.214 [2024-12-09 10:28:42.584270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:37:13.214 [2024-12-09 10:28:42.584284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:37:13.214 [2024-12-09 10:28:42.584304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:37:13.214 [2024-12-09 10:28:42.584317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:37:13.214 [2024-12-09 10:28:42.584336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:37:13.214 [2024-12-09 10:28:42.584347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:37:13.214 [2024-12-09 10:28:42.584363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:37:13.214 [2024-12-09 10:28:42.584382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:37:13.214 [2024-12-09 10:28:42.584394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:37:13.214 [2024-12-09 10:28:42.584405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:37:13.214 [2024-12-09 10:28:42.584416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:37:13.214 [2024-12-09 10:28:42.584432] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:37:13.214 [2024-12-09 10:28:42.584448] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 802f82c2-d2ee-43f6-a065-5842431e7a2d 00:37:13.214 [2024-12-09 10:28:42.584460] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:37:13.214 [2024-12-09 
10:28:42.584472] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:37:13.214 [2024-12-09 10:28:42.584482] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:37:13.214 [2024-12-09 10:28:42.584494] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:37:13.214 [2024-12-09 10:28:42.584508] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:37:13.214 [2024-12-09 10:28:42.584538] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:37:13.215 [2024-12-09 10:28:42.584551] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:37:13.215 [2024-12-09 10:28:42.584561] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:37:13.215 [2024-12-09 10:28:42.584570] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:37:13.215 [2024-12-09 10:28:42.584587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:13.215 [2024-12-09 10:28:42.584607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:37:13.215 [2024-12-09 10:28:42.584619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.471 ms 00:37:13.215 [2024-12-09 10:28:42.584630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:13.215 [2024-12-09 10:28:42.600324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:13.215 [2024-12-09 10:28:42.600374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:37:13.215 [2024-12-09 10:28:42.600405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.626 ms 00:37:13.215 [2024-12-09 10:28:42.600423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:13.215 [2024-12-09 10:28:42.600945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:13.215 [2024-12-09 10:28:42.600969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:37:13.215 [2024-12-09 10:28:42.600990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.495 ms 00:37:13.215 [2024-12-09 10:28:42.601016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:13.215 [2024-12-09 10:28:42.652568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:13.215 [2024-12-09 10:28:42.652640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:37:13.215 [2024-12-09 10:28:42.652679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:13.215 [2024-12-09 10:28:42.652690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:13.215 [2024-12-09 10:28:42.652742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:13.215 [2024-12-09 10:28:42.652757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:37:13.215 [2024-12-09 10:28:42.652768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:13.215 [2024-12-09 10:28:42.652779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:13.215 [2024-12-09 10:28:42.652940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:13.215 [2024-12-09 10:28:42.652960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:37:13.215 [2024-12-09 10:28:42.652973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:13.215 [2024-12-09 10:28:42.652991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:37:13.215 [2024-12-09 10:28:42.653016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:13.215 [2024-12-09 10:28:42.653029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:37:13.215 [2024-12-09 10:28:42.653041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:13.215 [2024-12-09 10:28:42.653052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:13.215 [2024-12-09 10:28:42.748499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:13.215 [2024-12-09 10:28:42.748589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:37:13.215 [2024-12-09 10:28:42.748625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:13.215 [2024-12-09 10:28:42.748644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:13.215 [2024-12-09 10:28:42.830089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:13.215 [2024-12-09 10:28:42.830168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:37:13.215 [2024-12-09 10:28:42.830203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:13.215 [2024-12-09 10:28:42.830215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:13.215 [2024-12-09 10:28:42.830325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:13.215 [2024-12-09 10:28:42.830343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:37:13.215 [2024-12-09 10:28:42.830355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:13.215 [2024-12-09 10:28:42.830366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:13.215 [2024-12-09 10:28:42.830487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:13.215 [2024-12-09 10:28:42.830506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:37:13.215 [2024-12-09 10:28:42.830518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:13.215 [2024-12-09 10:28:42.830529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:13.215 [2024-12-09 10:28:42.830690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:13.215 [2024-12-09 10:28:42.830711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:37:13.215 [2024-12-09 10:28:42.830725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:13.215 [2024-12-09 10:28:42.830736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:13.215 [2024-12-09 10:28:42.830804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:13.215 [2024-12-09 10:28:42.830828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:37:13.215 [2024-12-09 10:28:42.830840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:13.215 [2024-12-09 10:28:42.830851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:13.215 [2024-12-09 10:28:42.830956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:13.215 [2024-12-09 10:28:42.830973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:37:13.215 [2024-12-09 10:28:42.830984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:13.215 [2024-12-09 10:28:42.830995] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:13.215 [2024-12-09 10:28:42.831070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:13.215 [2024-12-09 10:28:42.831087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:37:13.215 [2024-12-09 10:28:42.831099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:13.215 [2024-12-09 10:28:42.831110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:13.215 [2024-12-09 10:28:42.831289] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8872.873 ms, result 0 00:37:15.746 10:28:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:37:15.746 10:28:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:37:15.746 10:28:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:37:15.746 10:28:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:37:15.746 10:28:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:37:15.746 10:28:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84950 00:37:15.746 10:28:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:37:15.746 10:28:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:37:15.746 10:28:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84950 00:37:15.746 10:28:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84950 ']' 00:37:15.746 10:28:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:15.746 10:28:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:15.746 10:28:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:15.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:15.747 10:28:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:15.747 10:28:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:37:15.747 [2024-12-09 10:28:46.160604] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
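With the old target gone, tcp_target_setup relaunches spdk_tgt, this time replaying the configuration saved at common.sh@126 before the shutdown, so the transport, the cache and base bdevs, and the FTL device come back without manual RPCs. Note the contrast with the first bring-up: instead of scrubbing and initializing from scratch, the startup below loads and validates the existing superblock (SHM: clean 0, shm_clean 0). The relaunch as invoked above:

    # restart against the saved target config (ftl/common.sh@85)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json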
00:37:15.747 [2024-12-09 10:28:46.160806] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84950 ] 00:37:15.747 [2024-12-09 10:28:46.346238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:15.747 [2024-12-09 10:28:46.465811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:16.682 [2024-12-09 10:28:47.370314] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:37:16.682 [2024-12-09 10:28:47.370444] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:37:16.941 [2024-12-09 10:28:47.518535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:16.941 [2024-12-09 10:28:47.518624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:37:16.941 [2024-12-09 10:28:47.518663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:37:16.941 [2024-12-09 10:28:47.518676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:16.941 [2024-12-09 10:28:47.518755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:16.941 [2024-12-09 10:28:47.518774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:37:16.941 [2024-12-09 10:28:47.518788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:37:16.941 [2024-12-09 10:28:47.518799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:16.941 [2024-12-09 10:28:47.518841] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:37:16.941 [2024-12-09 10:28:47.519788] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:37:16.941 [2024-12-09 10:28:47.519891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:16.941 [2024-12-09 10:28:47.519908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:37:16.941 [2024-12-09 10:28:47.519921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.065 ms 00:37:16.941 [2024-12-09 10:28:47.519932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:16.941 [2024-12-09 10:28:47.522443] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:37:16.941 [2024-12-09 10:28:47.540600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:16.941 [2024-12-09 10:28:47.540656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:37:16.941 [2024-12-09 10:28:47.540698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.158 ms 00:37:16.941 [2024-12-09 10:28:47.540710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:16.941 [2024-12-09 10:28:47.540785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:16.941 [2024-12-09 10:28:47.540805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:37:16.941 [2024-12-09 10:28:47.540818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:37:16.941 [2024-12-09 10:28:47.540841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:16.941 [2024-12-09 10:28:47.550709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:16.942 [2024-12-09 
10:28:47.550752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:37:16.942 [2024-12-09 10:28:47.550784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.743 ms 00:37:16.942 [2024-12-09 10:28:47.550797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:16.942 [2024-12-09 10:28:47.550899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:16.942 [2024-12-09 10:28:47.550936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:37:16.942 [2024-12-09 10:28:47.550950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.065 ms 00:37:16.942 [2024-12-09 10:28:47.550975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:16.942 [2024-12-09 10:28:47.551090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:16.942 [2024-12-09 10:28:47.551114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:37:16.942 [2024-12-09 10:28:47.551144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:37:16.942 [2024-12-09 10:28:47.551156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:16.942 [2024-12-09 10:28:47.551196] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:37:16.942 [2024-12-09 10:28:47.556837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:16.942 [2024-12-09 10:28:47.556897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:37:16.942 [2024-12-09 10:28:47.556930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.651 ms 00:37:16.942 [2024-12-09 10:28:47.556948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:16.942 [2024-12-09 10:28:47.556994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:16.942 [2024-12-09 10:28:47.557011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:37:16.942 [2024-12-09 10:28:47.557040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:37:16.942 [2024-12-09 10:28:47.557063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:16.942 [2024-12-09 10:28:47.557121] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:37:16.942 [2024-12-09 10:28:47.557190] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:37:16.942 [2024-12-09 10:28:47.557233] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:37:16.942 [2024-12-09 10:28:47.557254] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:37:16.942 [2024-12-09 10:28:47.557367] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:37:16.942 [2024-12-09 10:28:47.557382] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:37:16.942 [2024-12-09 10:28:47.557398] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:37:16.942 [2024-12-09 10:28:47.557414] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:37:16.942 [2024-12-09 10:28:47.557429] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:37:16.942 [2024-12-09 10:28:47.557449] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:37:16.942 [2024-12-09 10:28:47.557460] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:37:16.942 [2024-12-09 10:28:47.557472] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:37:16.942 [2024-12-09 10:28:47.557484] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:37:16.942 [2024-12-09 10:28:47.557496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:16.942 [2024-12-09 10:28:47.557508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:37:16.942 [2024-12-09 10:28:47.557520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.379 ms 00:37:16.942 [2024-12-09 10:28:47.557531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:16.942 [2024-12-09 10:28:47.557633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:16.942 [2024-12-09 10:28:47.557650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:37:16.942 [2024-12-09 10:28:47.557670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:37:16.942 [2024-12-09 10:28:47.557681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:16.942 [2024-12-09 10:28:47.557812] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:37:16.942 [2024-12-09 10:28:47.557830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:37:16.942 [2024-12-09 10:28:47.557842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:37:16.942 [2024-12-09 10:28:47.557854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:16.942 [2024-12-09 10:28:47.557882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:37:16.942 [2024-12-09 10:28:47.557892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:37:16.942 [2024-12-09 10:28:47.557904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:37:16.942 [2024-12-09 10:28:47.557916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:37:16.942 [2024-12-09 10:28:47.557927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:37:16.942 [2024-12-09 10:28:47.557951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:16.942 [2024-12-09 10:28:47.557965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:37:16.942 [2024-12-09 10:28:47.557976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:37:16.942 [2024-12-09 10:28:47.557987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:16.942 [2024-12-09 10:28:47.557998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:37:16.942 [2024-12-09 10:28:47.558009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:37:16.942 [2024-12-09 10:28:47.558019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:16.942 [2024-12-09 10:28:47.558030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:37:16.942 [2024-12-09 10:28:47.558040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:37:16.942 [2024-12-09 10:28:47.558051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:16.942 [2024-12-09 10:28:47.558062] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:37:16.942 [2024-12-09 10:28:47.558072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:37:16.942 [2024-12-09 10:28:47.558086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:37:16.942 [2024-12-09 10:28:47.558097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:37:16.942 [2024-12-09 10:28:47.558123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:37:16.942 [2024-12-09 10:28:47.558135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:37:16.942 [2024-12-09 10:28:47.558146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:37:16.942 [2024-12-09 10:28:47.558157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:37:16.942 [2024-12-09 10:28:47.558168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:37:16.942 [2024-12-09 10:28:47.558178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:37:16.942 [2024-12-09 10:28:47.558191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:37:16.942 [2024-12-09 10:28:47.558202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:37:16.942 [2024-12-09 10:28:47.558213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:37:16.942 [2024-12-09 10:28:47.558224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:37:16.942 [2024-12-09 10:28:47.558235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:16.942 [2024-12-09 10:28:47.558246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:37:16.942 [2024-12-09 10:28:47.558256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:37:16.942 [2024-12-09 10:28:47.558267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:16.942 [2024-12-09 10:28:47.558278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:37:16.942 [2024-12-09 10:28:47.558289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:37:16.942 [2024-12-09 10:28:47.558300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:16.942 [2024-12-09 10:28:47.558311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:37:16.942 [2024-12-09 10:28:47.558322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:37:16.942 [2024-12-09 10:28:47.558333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:16.942 [2024-12-09 10:28:47.558343] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:37:16.942 [2024-12-09 10:28:47.558356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:37:16.942 [2024-12-09 10:28:47.558368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:37:16.942 [2024-12-09 10:28:47.558381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:16.942 [2024-12-09 10:28:47.558398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:37:16.942 [2024-12-09 10:28:47.558410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:37:16.942 [2024-12-09 10:28:47.558436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:37:16.942 [2024-12-09 10:28:47.558448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:37:16.942 [2024-12-09 10:28:47.558458] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:37:16.942 [2024-12-09 10:28:47.558469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:37:16.942 [2024-12-09 10:28:47.558481] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:37:16.942 [2024-12-09 10:28:47.558496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:37:16.942 [2024-12-09 10:28:47.558524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:37:16.942 [2024-12-09 10:28:47.558537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:37:16.942 [2024-12-09 10:28:47.558550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:37:16.942 [2024-12-09 10:28:47.558561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:37:16.942 [2024-12-09 10:28:47.558574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:37:16.942 [2024-12-09 10:28:47.558585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:37:16.942 [2024-12-09 10:28:47.558619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:37:16.942 [2024-12-09 10:28:47.558633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:37:16.942 [2024-12-09 10:28:47.558645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:37:16.943 [2024-12-09 10:28:47.558656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:37:16.943 [2024-12-09 10:28:47.558670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:37:16.943 [2024-12-09 10:28:47.558682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:37:16.943 [2024-12-09 10:28:47.558693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:37:16.943 [2024-12-09 10:28:47.558705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:37:16.943 [2024-12-09 10:28:47.558717] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:37:16.943 [2024-12-09 10:28:47.558730] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:37:16.943 [2024-12-09 10:28:47.558743] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:37:16.943 [2024-12-09 10:28:47.558754] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:37:16.943 [2024-12-09 10:28:47.558766] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:37:16.943 [2024-12-09 10:28:47.558778] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:37:16.943 [2024-12-09 10:28:47.558790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:16.943 [2024-12-09 10:28:47.558802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:37:16.943 [2024-12-09 10:28:47.558815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.059 ms 00:37:16.943 [2024-12-09 10:28:47.558840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:16.943 [2024-12-09 10:28:47.558911] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:37:16.943 [2024-12-09 10:28:47.558931] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:37:20.228 [2024-12-09 10:28:50.575900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:20.228 [2024-12-09 10:28:50.576004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:37:20.228 [2024-12-09 10:28:50.576044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3017.001 ms 00:37:20.228 [2024-12-09 10:28:50.576057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:20.228 [2024-12-09 10:28:50.622533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:20.228 [2024-12-09 10:28:50.622644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:37:20.228 [2024-12-09 10:28:50.622669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 46.144 ms 00:37:20.228 [2024-12-09 10:28:50.622683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:20.228 [2024-12-09 10:28:50.622873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:20.228 [2024-12-09 10:28:50.622896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:37:20.228 [2024-12-09 10:28:50.622926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:37:20.228 [2024-12-09 10:28:50.622939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:20.228 [2024-12-09 10:28:50.672621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:20.228 [2024-12-09 10:28:50.672679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:37:20.228 [2024-12-09 10:28:50.672721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 49.557 ms 00:37:20.228 [2024-12-09 10:28:50.672732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:20.228 [2024-12-09 10:28:50.672803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:20.228 [2024-12-09 10:28:50.672820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:37:20.228 [2024-12-09 10:28:50.672833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:37:20.228 [2024-12-09 10:28:50.672856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:20.228 [2024-12-09 10:28:50.673771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:20.228 [2024-12-09 10:28:50.673804] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:37:20.228 [2024-12-09 10:28:50.673819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.746 ms 00:37:20.228 [2024-12-09 10:28:50.673860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:20.228 [2024-12-09 10:28:50.673951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:20.228 [2024-12-09 10:28:50.674000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:37:20.228 [2024-12-09 10:28:50.674014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:37:20.228 [2024-12-09 10:28:50.674026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:20.228 [2024-12-09 10:28:50.696500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:20.228 [2024-12-09 10:28:50.696563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:37:20.229 [2024-12-09 10:28:50.696602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.443 ms 00:37:20.229 [2024-12-09 10:28:50.696622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:20.229 [2024-12-09 10:28:50.721158] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:37:20.229 [2024-12-09 10:28:50.721220] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:37:20.229 [2024-12-09 10:28:50.721244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:20.229 [2024-12-09 10:28:50.721257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:37:20.229 [2024-12-09 10:28:50.721276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.373 ms 00:37:20.229 [2024-12-09 10:28:50.721288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:20.229 [2024-12-09 10:28:50.736952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:20.229 [2024-12-09 10:28:50.736989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:37:20.229 [2024-12-09 10:28:50.737021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.538 ms 00:37:20.229 [2024-12-09 10:28:50.737033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:20.229 [2024-12-09 10:28:50.751663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:20.229 [2024-12-09 10:28:50.751700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:37:20.229 [2024-12-09 10:28:50.751731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.566 ms 00:37:20.229 [2024-12-09 10:28:50.751741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:20.229 [2024-12-09 10:28:50.766766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:20.229 [2024-12-09 10:28:50.766807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:37:20.229 [2024-12-09 10:28:50.766848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.980 ms 00:37:20.229 [2024-12-09 10:28:50.766862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:20.229 [2024-12-09 10:28:50.767929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:20.229 [2024-12-09 10:28:50.768021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:37:20.229 [2024-12-09 
10:28:50.768038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.853 ms 00:37:20.229 [2024-12-09 10:28:50.768050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:20.229 [2024-12-09 10:28:50.841363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:20.229 [2024-12-09 10:28:50.841473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:37:20.229 [2024-12-09 10:28:50.841518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 73.282 ms 00:37:20.229 [2024-12-09 10:28:50.841532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:20.229 [2024-12-09 10:28:50.852778] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:37:20.229 [2024-12-09 10:28:50.853745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:20.229 [2024-12-09 10:28:50.853793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:37:20.229 [2024-12-09 10:28:50.853825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.117 ms 00:37:20.229 [2024-12-09 10:28:50.853852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:20.229 [2024-12-09 10:28:50.853998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:20.229 [2024-12-09 10:28:50.854039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:37:20.229 [2024-12-09 10:28:50.854054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:37:20.229 [2024-12-09 10:28:50.854066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:20.229 [2024-12-09 10:28:50.854152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:20.229 [2024-12-09 10:28:50.854179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:37:20.229 [2024-12-09 10:28:50.854193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:37:20.229 [2024-12-09 10:28:50.854219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:20.229 [2024-12-09 10:28:50.854257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:20.229 [2024-12-09 10:28:50.854273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:37:20.229 [2024-12-09 10:28:50.854292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:37:20.229 [2024-12-09 10:28:50.854304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:20.229 [2024-12-09 10:28:50.854353] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:37:20.229 [2024-12-09 10:28:50.854371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:20.229 [2024-12-09 10:28:50.854384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:37:20.229 [2024-12-09 10:28:50.854396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:37:20.229 [2024-12-09 10:28:50.854407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:20.229 [2024-12-09 10:28:50.885347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:20.229 [2024-12-09 10:28:50.885408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:37:20.229 [2024-12-09 10:28:50.885440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.892 ms 00:37:20.229 [2024-12-09 10:28:50.885451] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:20.229 [2024-12-09 10:28:50.885540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:20.229 [2024-12-09 10:28:50.885558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:37:20.229 [2024-12-09 10:28:50.885570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:37:20.229 [2024-12-09 10:28:50.885581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:20.229 [2024-12-09 10:28:50.887503] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3368.248 ms, result 0 00:37:20.229 [2024-12-09 10:28:50.901828] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:20.229 [2024-12-09 10:28:50.917791] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:37:20.229 [2024-12-09 10:28:50.926117] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:20.229 10:28:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:20.229 10:28:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:37:20.229 10:28:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:37:20.229 10:28:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:37:20.229 10:28:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:37:20.487 [2024-12-09 10:28:51.178278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:20.487 [2024-12-09 10:28:51.178389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:37:20.487 [2024-12-09 10:28:51.178443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:37:20.487 [2024-12-09 10:28:51.178459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:20.487 [2024-12-09 10:28:51.178503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:20.487 [2024-12-09 10:28:51.178522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:37:20.487 [2024-12-09 10:28:51.178535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:37:20.487 [2024-12-09 10:28:51.178553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:20.487 [2024-12-09 10:28:51.178586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:20.487 [2024-12-09 10:28:51.178639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:37:20.487 [2024-12-09 10:28:51.178657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:37:20.487 [2024-12-09 10:28:51.178675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:20.487 [2024-12-09 10:28:51.178805] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.530 ms, result 0 00:37:20.487 true 00:37:20.487 10:28:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:37:20.745 { 00:37:20.745 "name": "ftl", 00:37:20.745 "properties": [ 00:37:20.745 { 00:37:20.745 "name": "superblock_version", 00:37:20.745 "value": 5, 00:37:20.745 "read-only": true 00:37:20.745 }, 
00:37:20.745 { 00:37:20.745 "name": "base_device", 00:37:20.745 "bands": [ 00:37:20.745 { 00:37:20.745 "id": 0, 00:37:20.745 "state": "CLOSED", 00:37:20.745 "validity": 1.0 00:37:20.745 }, 00:37:20.745 { 00:37:20.745 "id": 1, 00:37:20.745 "state": "CLOSED", 00:37:20.745 "validity": 1.0 00:37:20.745 }, 00:37:20.745 { 00:37:20.745 "id": 2, 00:37:20.745 "state": "CLOSED", 00:37:20.745 "validity": 0.007843137254901933 00:37:20.745 }, 00:37:20.745 { 00:37:20.745 "id": 3, 00:37:20.745 "state": "FREE", 00:37:20.745 "validity": 0.0 00:37:20.745 }, 00:37:20.745 { 00:37:20.745 "id": 4, 00:37:20.745 "state": "FREE", 00:37:20.745 "validity": 0.0 00:37:20.745 }, 00:37:20.745 { 00:37:20.745 "id": 5, 00:37:20.745 "state": "FREE", 00:37:20.745 "validity": 0.0 00:37:20.745 }, 00:37:20.745 { 00:37:20.745 "id": 6, 00:37:20.745 "state": "FREE", 00:37:20.745 "validity": 0.0 00:37:20.745 }, 00:37:20.745 { 00:37:20.745 "id": 7, 00:37:20.745 "state": "FREE", 00:37:20.745 "validity": 0.0 00:37:20.745 }, 00:37:20.745 { 00:37:20.745 "id": 8, 00:37:20.745 "state": "FREE", 00:37:20.745 "validity": 0.0 00:37:20.745 }, 00:37:20.745 { 00:37:20.745 "id": 9, 00:37:20.745 "state": "FREE", 00:37:20.745 "validity": 0.0 00:37:20.745 }, 00:37:20.745 { 00:37:20.745 "id": 10, 00:37:20.745 "state": "FREE", 00:37:20.745 "validity": 0.0 00:37:20.745 }, 00:37:20.745 { 00:37:20.745 "id": 11, 00:37:20.745 "state": "FREE", 00:37:20.745 "validity": 0.0 00:37:20.745 }, 00:37:20.745 { 00:37:20.745 "id": 12, 00:37:20.745 "state": "FREE", 00:37:20.745 "validity": 0.0 00:37:20.745 }, 00:37:20.745 { 00:37:20.745 "id": 13, 00:37:20.745 "state": "FREE", 00:37:20.745 "validity": 0.0 00:37:20.745 }, 00:37:20.745 { 00:37:20.745 "id": 14, 00:37:20.745 "state": "FREE", 00:37:20.745 "validity": 0.0 00:37:20.745 }, 00:37:20.745 { 00:37:20.745 "id": 15, 00:37:20.745 "state": "FREE", 00:37:20.745 "validity": 0.0 00:37:20.745 }, 00:37:20.745 { 00:37:20.745 "id": 16, 00:37:20.745 "state": "FREE", 00:37:20.745 "validity": 0.0 00:37:20.745 }, 00:37:20.745 { 00:37:20.745 "id": 17, 00:37:20.745 "state": "FREE", 00:37:20.745 "validity": 0.0 00:37:20.745 } 00:37:20.745 ], 00:37:20.745 "read-only": true 00:37:20.745 }, 00:37:20.745 { 00:37:20.745 "name": "cache_device", 00:37:20.745 "type": "bdev", 00:37:20.745 "chunks": [ 00:37:20.745 { 00:37:20.745 "id": 0, 00:37:20.745 "state": "INACTIVE", 00:37:20.745 "utilization": 0.0 00:37:20.745 }, 00:37:20.745 { 00:37:20.745 "id": 1, 00:37:20.745 "state": "OPEN", 00:37:20.745 "utilization": 0.0 00:37:20.745 }, 00:37:20.745 { 00:37:20.745 "id": 2, 00:37:20.745 "state": "OPEN", 00:37:20.745 "utilization": 0.0 00:37:20.745 }, 00:37:20.745 { 00:37:20.745 "id": 3, 00:37:20.745 "state": "FREE", 00:37:20.745 "utilization": 0.0 00:37:20.745 }, 00:37:20.745 { 00:37:20.745 "id": 4, 00:37:20.745 "state": "FREE", 00:37:20.745 "utilization": 0.0 00:37:20.745 } 00:37:20.745 ], 00:37:20.745 "read-only": true 00:37:20.745 }, 00:37:20.745 { 00:37:20.745 "name": "verbose_mode", 00:37:20.745 "value": true, 00:37:20.745 "unit": "", 00:37:20.745 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:37:20.745 }, 00:37:20.745 { 00:37:20.745 "name": "prep_upgrade_on_shutdown", 00:37:20.745 "value": false, 00:37:20.745 "unit": "", 00:37:20.745 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:37:20.745 } 00:37:20.745 ] 00:37:20.745 } 00:37:20.745 10:28:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == 
"cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:37:20.745 10:28:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:37:20.745 10:28:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:37:21.004 10:28:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:37:21.004 10:28:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:37:21.004 10:28:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:37:21.004 10:28:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:37:21.004 10:28:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:37:21.262 Validate MD5 checksum, iteration 1 00:37:21.262 10:28:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:37:21.262 10:28:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:37:21.262 10:28:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:37:21.262 10:28:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:37:21.262 10:28:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:37:21.262 10:28:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:37:21.262 10:28:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:37:21.262 10:28:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:37:21.262 10:28:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:37:21.262 10:28:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:37:21.262 10:28:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:37:21.262 10:28:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:37:21.262 10:28:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:37:21.520 [2024-12-09 10:28:52.135033] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
00:37:21.520 [2024-12-09 10:28:52.135993] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85026 ] 00:37:21.779 [2024-12-09 10:28:52.336726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:21.779 [2024-12-09 10:28:52.494317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:23.679  [2024-12-09T10:28:55.411Z] Copying: 469/1024 [MB] (469 MBps) [2024-12-09T10:28:55.411Z] Copying: 926/1024 [MB] (457 MBps) [2024-12-09T10:28:57.311Z] Copying: 1024/1024 [MB] (average 462 MBps) 00:37:26.514 00:37:26.514 10:28:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:37:26.514 10:28:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:37:28.417 10:28:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:37:28.417 Validate MD5 checksum, iteration 2 00:37:28.417 10:28:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=f15f8d3224ba46d7e1965ebc9801bbaf 00:37:28.417 10:28:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ f15f8d3224ba46d7e1965ebc9801bbaf != \f\1\5\f\8\d\3\2\2\4\b\a\4\6\d\7\e\1\9\6\5\e\b\c\9\8\0\1\b\b\a\f ]] 00:37:28.417 10:28:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:37:28.417 10:28:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:37:28.417 10:28:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:37:28.417 10:28:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:37:28.417 10:28:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:37:28.417 10:28:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:37:28.417 10:28:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:37:28.417 10:28:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:37:28.417 10:28:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:37:28.676 [2024-12-09 10:28:59.253863] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
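[editor's note] The checksum pass itself is a fixed loop: read the next 1024 MiB of ftln1 over NVMe/TCP with spdk_dd, hash the file, and compare against the sum recorded before the shutdown. A condensed sketch with the spdk_dd flags copied from the xtrace; iterations is 2 in this run, md5_expected holds the two sums actually observed in the log, and the exit-on-mismatch is an assumption about how the script reacts.

```bash
# Two 1 GiB reads at increasing --skip offsets; each chunk must hash to the
# value recorded earlier. The expected sums are the ones seen in this run.
SPDK=/home/vagrant/spdk_repo/spdk
md5_expected=(f15f8d3224ba46d7e1965ebc9801bbaf a24d83cbecb8043da2f316bbb1765311)
iterations=2
skip=0
for ((i = 0; i < iterations; i++)); do
    echo "Validate MD5 checksum, iteration $((i + 1))"
    "$SPDK/build/bin/spdk_dd" '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json="$SPDK/test/ftl/config/ini.json" \
        --ib=ftln1 --of="$SPDK/test/ftl/file" \
        --bs=1048576 --count=1024 --qd=2 --skip=$skip
    skip=$((skip + 1024))
    sum=$(md5sum "$SPDK/test/ftl/file" | cut -f1 -d' ')
    [[ $sum != "${md5_expected[$i]}" ]] && exit 1    # mismatch fails the test
done
```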
00:37:28.676 [2024-12-09 10:28:59.254049] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85104 ] 00:37:28.676 [2024-12-09 10:28:59.434844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:28.934 [2024-12-09 10:28:59.581586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:30.857  [2024-12-09T10:29:02.230Z] Copying: 475/1024 [MB] (475 MBps) [2024-12-09T10:29:02.489Z] Copying: 951/1024 [MB] (476 MBps) [2024-12-09T10:29:05.027Z] Copying: 1024/1024 [MB] (average 465 MBps) 00:37:34.230 00:37:34.230 10:29:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:37:34.230 10:29:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:37:36.779 10:29:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:37:36.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:36.779 10:29:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=a24d83cbecb8043da2f316bbb1765311 00:37:36.779 10:29:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ a24d83cbecb8043da2f316bbb1765311 != \a\2\4\d\8\3\c\b\e\c\b\8\0\4\3\d\a\2\f\3\1\6\b\b\b\1\7\6\5\3\1\1 ]] 00:37:36.779 10:29:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:37:36.779 10:29:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:37:36.779 10:29:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:37:36.779 10:29:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 84950 ]] 00:37:36.779 10:29:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 84950 00:37:36.779 10:29:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:37:36.779 10:29:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:37:36.779 10:29:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:37:36.779 10:29:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:37:36.779 10:29:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:37:36.779 10:29:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=85181 00:37:36.779 10:29:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:37:36.779 10:29:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 85181 00:37:36.779 10:29:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 85181 ']' 00:37:36.779 10:29:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:36.779 10:29:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:36.779 10:29:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
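[editor's note] The dirty-shutdown step traced above is deliberately brutal: kill -9 denies FTL its fast-shutdown path (hence the shell's "84950 Killed" message just below), and a fresh target started from the same saved config forces the next 'FTL startup' to bring the device up from whatever state the kill left behind. A minimal sketch of tcp_target_shutdown_dirty plus the restart, under the same assumptions as the earlier launch sketch:

```bash
# SIGKILL the running target so FTL cannot persist its shutdown state,
# then start a new target from the same saved config; the subsequent FTL
# startup in the log must recover from this dirty state.
SPDK=/home/vagrant/spdk_repo/spdk
if [[ -n $spdk_tgt_pid ]]; then
    kill -9 "$spdk_tgt_pid"
    unset spdk_tgt_pid
fi
"$SPDK/build/bin/spdk_tgt" '--cpumask=[0]' \
    --config="$SPDK/test/ftl/config/tgt.json" &
spdk_tgt_pid=$!
```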
00:37:36.779 10:29:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:37:36.779 10:29:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:36.779 10:29:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:37:36.779 [2024-12-09 10:29:07.309151] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:37:36.779 [2024-12-09 10:29:07.310165] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85181 ] 00:37:36.779 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 84950 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:37:36.779 [2024-12-09 10:29:07.491399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:37.038 [2024-12-09 10:29:07.626733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:37.973 [2024-12-09 10:29:08.616482] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:37:37.973 [2024-12-09 10:29:08.616593] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:37:38.232 [2024-12-09 10:29:08.770629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:38.232 [2024-12-09 10:29:08.770723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:37:38.233 [2024-12-09 10:29:08.770760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:37:38.233 [2024-12-09 10:29:08.770772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:38.233 [2024-12-09 10:29:08.770881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:38.233 [2024-12-09 10:29:08.770916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:37:38.233 [2024-12-09 10:29:08.770930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.078 ms 00:37:38.233 [2024-12-09 10:29:08.770956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:38.233 [2024-12-09 10:29:08.770999] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:37:38.233 [2024-12-09 10:29:08.772114] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:37:38.233 [2024-12-09 10:29:08.772168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:38.233 [2024-12-09 10:29:08.772213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:37:38.233 [2024-12-09 10:29:08.772227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.185 ms 00:37:38.233 [2024-12-09 10:29:08.772238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:38.233 [2024-12-09 10:29:08.772944] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:37:38.233 [2024-12-09 10:29:08.796164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:38.233 [2024-12-09 10:29:08.796215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:37:38.233 [2024-12-09 10:29:08.796234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.220 ms 
00:37:38.233 [2024-12-09 10:29:08.796248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:38.233 [2024-12-09 10:29:08.808711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:38.233 [2024-12-09 10:29:08.808769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:37:38.233 [2024-12-09 10:29:08.808786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:37:38.233 [2024-12-09 10:29:08.808798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:38.233 [2024-12-09 10:29:08.809305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:38.233 [2024-12-09 10:29:08.809341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:37:38.233 [2024-12-09 10:29:08.809357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.381 ms 00:37:38.233 [2024-12-09 10:29:08.809376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:38.233 [2024-12-09 10:29:08.809452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:38.233 [2024-12-09 10:29:08.809472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:37:38.233 [2024-12-09 10:29:08.809484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:37:38.233 [2024-12-09 10:29:08.809495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:38.233 [2024-12-09 10:29:08.809535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:38.233 [2024-12-09 10:29:08.809551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:37:38.233 [2024-12-09 10:29:08.809564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:37:38.233 [2024-12-09 10:29:08.809575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:38.233 [2024-12-09 10:29:08.809611] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:37:38.233 [2024-12-09 10:29:08.813556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:38.233 [2024-12-09 10:29:08.813623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:37:38.233 [2024-12-09 10:29:08.813638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.954 ms 00:37:38.233 [2024-12-09 10:29:08.813655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:38.233 [2024-12-09 10:29:08.813693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:38.233 [2024-12-09 10:29:08.813709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:37:38.233 [2024-12-09 10:29:08.813722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:37:38.233 [2024-12-09 10:29:08.813732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:38.233 [2024-12-09 10:29:08.813781] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:37:38.233 [2024-12-09 10:29:08.813814] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:37:38.233 [2024-12-09 10:29:08.813873] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:37:38.233 [2024-12-09 10:29:08.813901] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:37:38.233 [2024-12-09 
10:29:08.814013] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:37:38.233 [2024-12-09 10:29:08.814030] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:37:38.233 [2024-12-09 10:29:08.814045] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:37:38.233 [2024-12-09 10:29:08.814060] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:37:38.233 [2024-12-09 10:29:08.814074] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:37:38.233 [2024-12-09 10:29:08.814087] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:37:38.233 [2024-12-09 10:29:08.814099] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:37:38.233 [2024-12-09 10:29:08.814110] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:37:38.233 [2024-12-09 10:29:08.814127] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:37:38.233 [2024-12-09 10:29:08.814140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:38.233 [2024-12-09 10:29:08.814151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:37:38.233 [2024-12-09 10:29:08.814163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.361 ms 00:37:38.233 [2024-12-09 10:29:08.814175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:38.233 [2024-12-09 10:29:08.814270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:38.233 [2024-12-09 10:29:08.814299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:37:38.233 [2024-12-09 10:29:08.814312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:37:38.233 [2024-12-09 10:29:08.814323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:38.233 [2024-12-09 10:29:08.814441] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:37:38.233 [2024-12-09 10:29:08.814464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:37:38.233 [2024-12-09 10:29:08.814477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:37:38.233 [2024-12-09 10:29:08.814488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:38.233 [2024-12-09 10:29:08.814500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:37:38.233 [2024-12-09 10:29:08.814511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:37:38.233 [2024-12-09 10:29:08.814522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:37:38.233 [2024-12-09 10:29:08.814533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:37:38.233 [2024-12-09 10:29:08.814543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:37:38.233 [2024-12-09 10:29:08.814554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:38.233 [2024-12-09 10:29:08.814564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:37:38.233 [2024-12-09 10:29:08.814574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:37:38.233 [2024-12-09 10:29:08.814585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:38.233 [2024-12-09 
10:29:08.814595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:37:38.233 [2024-12-09 10:29:08.814623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:37:38.233 [2024-12-09 10:29:08.814634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:38.233 [2024-12-09 10:29:08.814645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:37:38.233 [2024-12-09 10:29:08.814655] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:37:38.233 [2024-12-09 10:29:08.814665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:38.233 [2024-12-09 10:29:08.814676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:37:38.233 [2024-12-09 10:29:08.814692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:37:38.233 [2024-12-09 10:29:08.814717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:37:38.233 [2024-12-09 10:29:08.814728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:37:38.233 [2024-12-09 10:29:08.814739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:37:38.233 [2024-12-09 10:29:08.814750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:37:38.233 [2024-12-09 10:29:08.814761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:37:38.233 [2024-12-09 10:29:08.814773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:37:38.233 [2024-12-09 10:29:08.814783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:37:38.233 [2024-12-09 10:29:08.814794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:37:38.234 [2024-12-09 10:29:08.814805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:37:38.234 [2024-12-09 10:29:08.814816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:37:38.234 [2024-12-09 10:29:08.814846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:37:38.234 [2024-12-09 10:29:08.814862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:37:38.234 [2024-12-09 10:29:08.814873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:38.234 [2024-12-09 10:29:08.814884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:37:38.234 [2024-12-09 10:29:08.814895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:37:38.234 [2024-12-09 10:29:08.814906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:38.234 [2024-12-09 10:29:08.814916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:37:38.234 [2024-12-09 10:29:08.814927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:37:38.234 [2024-12-09 10:29:08.814937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:38.234 [2024-12-09 10:29:08.814949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:37:38.234 [2024-12-09 10:29:08.814960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:37:38.234 [2024-12-09 10:29:08.814970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:38.234 [2024-12-09 10:29:08.814980] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:37:38.234 [2024-12-09 10:29:08.814993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:37:38.234 
[2024-12-09 10:29:08.815004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:37:38.234 [2024-12-09 10:29:08.815016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:38.234 [2024-12-09 10:29:08.815028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:37:38.234 [2024-12-09 10:29:08.815039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:37:38.234 [2024-12-09 10:29:08.815050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:37:38.234 [2024-12-09 10:29:08.815060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:37:38.234 [2024-12-09 10:29:08.815070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:37:38.234 [2024-12-09 10:29:08.815080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:37:38.234 [2024-12-09 10:29:08.815092] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:37:38.234 [2024-12-09 10:29:08.815106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:37:38.234 [2024-12-09 10:29:08.815119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:37:38.234 [2024-12-09 10:29:08.815131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:37:38.234 [2024-12-09 10:29:08.815143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:37:38.234 [2024-12-09 10:29:08.815162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:37:38.234 [2024-12-09 10:29:08.815174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:37:38.234 [2024-12-09 10:29:08.815186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:37:38.234 [2024-12-09 10:29:08.815198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:37:38.234 [2024-12-09 10:29:08.815210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:37:38.234 [2024-12-09 10:29:08.815221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:37:38.234 [2024-12-09 10:29:08.815233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:37:38.234 [2024-12-09 10:29:08.815245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:37:38.234 [2024-12-09 10:29:08.815256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:37:38.234 [2024-12-09 10:29:08.815268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:37:38.234 [2024-12-09 10:29:08.815279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:37:38.234 [2024-12-09 10:29:08.815290] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:37:38.234 [2024-12-09 10:29:08.815309] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:37:38.234 [2024-12-09 10:29:08.815323] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:37:38.234 [2024-12-09 10:29:08.815334] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:37:38.234 [2024-12-09 10:29:08.815346] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:37:38.234 [2024-12-09 10:29:08.815357] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:37:38.234 [2024-12-09 10:29:08.815370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:38.234 [2024-12-09 10:29:08.815381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:37:38.234 [2024-12-09 10:29:08.815393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.997 ms 00:37:38.234 [2024-12-09 10:29:08.815404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:38.234 [2024-12-09 10:29:08.854542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:38.234 [2024-12-09 10:29:08.854657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:37:38.234 [2024-12-09 10:29:08.854681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.060 ms 00:37:38.234 [2024-12-09 10:29:08.854694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:38.234 [2024-12-09 10:29:08.854767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:38.234 [2024-12-09 10:29:08.854784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:37:38.234 [2024-12-09 10:29:08.854799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:37:38.234 [2024-12-09 10:29:08.854811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:38.234 [2024-12-09 10:29:08.904142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:38.234 [2024-12-09 10:29:08.904225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:37:38.234 [2024-12-09 10:29:08.904262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 49.213 ms 00:37:38.234 [2024-12-09 10:29:08.904275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:38.234 [2024-12-09 10:29:08.904351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:38.234 [2024-12-09 10:29:08.904369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:37:38.234 [2024-12-09 10:29:08.904383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:37:38.234 [2024-12-09 10:29:08.904400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:38.234 [2024-12-09 10:29:08.904629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:38.234 [2024-12-09 10:29:08.904649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 
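
The blk_offs/blk_sz values in the superblock metadata layout are counts of FTL blocks, and the MiB figures in the region dump follow directly from them if a 4 KiB block size is assumed (the block size is an inference from the figures, not stated in the log). A shell sketch of the conversion for a few of the sizes printed above:

    blk=4096  # assumed FTL block size; every region size above is consistent with it
    for sz in 0x20 0xe80 0x800 0x480000; do
        awk -v s=$((sz)) -v b=$blk \
            'BEGIN { printf "%8d blocks = %.2f MiB\n", s, s * b / (1024 * 1024) }'
    done
    # 0x20 -> 0.12 MiB (sb, trim_md, ...), 0xe80 -> 14.50 MiB (l2p),
    # 0x800 -> 8.00 MiB (p2l0..p2l3), 0x480000 -> 18432.00 MiB (data_btm)
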
00:37:38.234 [2024-12-09 10:29:08.904662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.081 ms 00:37:38.234 [2024-12-09 10:29:08.904674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:38.234 [2024-12-09 10:29:08.904741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:38.234 [2024-12-09 10:29:08.904765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:37:38.234 [2024-12-09 10:29:08.904780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:37:38.234 [2024-12-09 10:29:08.904800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:38.234 [2024-12-09 10:29:08.929173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:38.234 [2024-12-09 10:29:08.929243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:37:38.234 [2024-12-09 10:29:08.929267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.322 ms 00:37:38.234 [2024-12-09 10:29:08.929280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:38.234 [2024-12-09 10:29:08.929480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:38.234 [2024-12-09 10:29:08.929514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:37:38.234 [2024-12-09 10:29:08.929531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:37:38.234 [2024-12-09 10:29:08.929544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:38.234 [2024-12-09 10:29:08.963252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:38.234 [2024-12-09 10:29:08.963317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:37:38.234 [2024-12-09 10:29:08.963353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.670 ms 00:37:38.235 [2024-12-09 10:29:08.963366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:38.235 [2024-12-09 10:29:08.976284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:38.235 [2024-12-09 10:29:08.976348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:37:38.235 [2024-12-09 10:29:08.976381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.809 ms 00:37:38.235 [2024-12-09 10:29:08.976393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:38.494 [2024-12-09 10:29:09.056410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:38.494 [2024-12-09 10:29:09.056522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:37:38.494 [2024-12-09 10:29:09.056560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 79.932 ms 00:37:38.494 [2024-12-09 10:29:09.056573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:38.494 [2024-12-09 10:29:09.056856] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:37:38.494 [2024-12-09 10:29:09.057020] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:37:38.494 [2024-12-09 10:29:09.057178] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:37:38.494 [2024-12-09 10:29:09.057325] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:37:38.494 [2024-12-09 10:29:09.057355] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:38.494 [2024-12-09 10:29:09.057368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:37:38.494 [2024-12-09 10:29:09.057381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.676 ms 00:37:38.494 [2024-12-09 10:29:09.057394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:38.494 [2024-12-09 10:29:09.057533] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:37:38.494 [2024-12-09 10:29:09.057566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:38.494 [2024-12-09 10:29:09.057579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:37:38.494 [2024-12-09 10:29:09.057593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:37:38.494 [2024-12-09 10:29:09.057605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:38.494 [2024-12-09 10:29:09.077650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:38.494 [2024-12-09 10:29:09.077708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:37:38.494 [2024-12-09 10:29:09.077741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.012 ms 00:37:38.494 [2024-12-09 10:29:09.077753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:38.494 [2024-12-09 10:29:09.089253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:38.494 [2024-12-09 10:29:09.089308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:37:38.494 [2024-12-09 10:29:09.089340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:37:38.494 [2024-12-09 10:29:09.089352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:38.494 [2024-12-09 10:29:09.089484] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:37:38.494 [2024-12-09 10:29:09.089821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:38.494 [2024-12-09 10:29:09.089877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:37:38.494 [2024-12-09 10:29:09.089893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.340 ms 00:37:38.494 [2024-12-09 10:29:09.089906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:39.062 [2024-12-09 10:29:09.748795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:39.062 [2024-12-09 10:29:09.748886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:37:39.062 [2024-12-09 10:29:09.748913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 657.759 ms 00:37:39.062 [2024-12-09 10:29:09.748928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:39.062 [2024-12-09 10:29:09.754170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:39.062 [2024-12-09 10:29:09.754214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:37:39.062 [2024-12-09 10:29:09.754233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.163 ms 00:37:39.062 [2024-12-09 10:29:09.754255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:39.062 [2024-12-09 10:29:09.754657] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered 
chunk, offset = 262144, seq id 14 00:37:39.062 [2024-12-09 10:29:09.754697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:39.062 [2024-12-09 10:29:09.754710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:37:39.062 [2024-12-09 10:29:09.754724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.399 ms 00:37:39.062 [2024-12-09 10:29:09.754737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:39.062 [2024-12-09 10:29:09.754865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:39.062 [2024-12-09 10:29:09.754886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:37:39.062 [2024-12-09 10:29:09.754908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:37:39.062 [2024-12-09 10:29:09.754921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:39.062 [2024-12-09 10:29:09.754972] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 665.494 ms, result 0 00:37:39.062 [2024-12-09 10:29:09.755044] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:37:39.062 [2024-12-09 10:29:09.755266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:39.062 [2024-12-09 10:29:09.755287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:37:39.062 [2024-12-09 10:29:09.755300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.225 ms 00:37:39.062 [2024-12-09 10:29:09.755312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:39.629 [2024-12-09 10:29:10.365430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:39.629 [2024-12-09 10:29:10.365582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:37:39.629 [2024-12-09 10:29:10.365653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 608.996 ms 00:37:39.629 [2024-12-09 10:29:10.365666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:39.629 [2024-12-09 10:29:10.371313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:39.629 [2024-12-09 10:29:10.371370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:37:39.629 [2024-12-09 10:29:10.371404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.511 ms 00:37:39.629 [2024-12-09 10:29:10.371416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:39.629 [2024-12-09 10:29:10.371905] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:37:39.629 [2024-12-09 10:29:10.371954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:39.629 [2024-12-09 10:29:10.371968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:37:39.629 [2024-12-09 10:29:10.371981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.500 ms 00:37:39.629 [2024-12-09 10:29:10.371994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:39.629 [2024-12-09 10:29:10.372038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:39.629 [2024-12-09 10:29:10.372056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:37:39.629 [2024-12-09 10:29:10.372069] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:37:39.629 [2024-12-09 10:29:10.372080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:39.629 [2024-12-09 10:29:10.372132] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 617.086 ms, result 0 00:37:39.630 [2024-12-09 10:29:10.372191] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:37:39.630 [2024-12-09 10:29:10.372210] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:37:39.630 [2024-12-09 10:29:10.372226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:39.630 [2024-12-09 10:29:10.372239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:37:39.630 [2024-12-09 10:29:10.372252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1282.784 ms 00:37:39.630 [2024-12-09 10:29:10.372265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:39.630 [2024-12-09 10:29:10.372315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:39.630 [2024-12-09 10:29:10.372331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:37:39.630 [2024-12-09 10:29:10.372344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:37:39.630 [2024-12-09 10:29:10.372356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:39.630 [2024-12-09 10:29:10.386664] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:37:39.630 [2024-12-09 10:29:10.386820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:39.630 [2024-12-09 10:29:10.386860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:37:39.630 [2024-12-09 10:29:10.386875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.441 ms 00:37:39.630 [2024-12-09 10:29:10.386887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:39.630 [2024-12-09 10:29:10.387709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:39.630 [2024-12-09 10:29:10.387765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:37:39.630 [2024-12-09 10:29:10.387810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.697 ms 00:37:39.630 [2024-12-09 10:29:10.387840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:39.630 [2024-12-09 10:29:10.390456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:39.630 [2024-12-09 10:29:10.390499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:37:39.630 [2024-12-09 10:29:10.390528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.591 ms 00:37:39.630 [2024-12-09 10:29:10.390539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:39.630 [2024-12-09 10:29:10.390589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:39.630 [2024-12-09 10:29:10.390631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:37:39.630 [2024-12-09 10:29:10.390652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:37:39.630 [2024-12-09 10:29:10.390664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:39.630 [2024-12-09 10:29:10.390798] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:39.630 [2024-12-09 10:29:10.390820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:37:39.630 [2024-12-09 10:29:10.390833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:37:39.630 [2024-12-09 10:29:10.390858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:39.630 [2024-12-09 10:29:10.390891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:39.630 [2024-12-09 10:29:10.390906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:37:39.630 [2024-12-09 10:29:10.390919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:37:39.630 [2024-12-09 10:29:10.390936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:39.630 [2024-12-09 10:29:10.390985] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:37:39.630 [2024-12-09 10:29:10.391003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:39.630 [2024-12-09 10:29:10.391015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:37:39.630 [2024-12-09 10:29:10.391028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:37:39.630 [2024-12-09 10:29:10.391038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:39.630 [2024-12-09 10:29:10.391122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:39.630 [2024-12-09 10:29:10.391151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:37:39.630 [2024-12-09 10:29:10.391165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:37:39.630 [2024-12-09 10:29:10.391183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:39.630 [2024-12-09 10:29:10.392731] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1621.548 ms, result 0 00:37:39.630 [2024-12-09 10:29:10.407309] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:39.630 [2024-12-09 10:29:10.423344] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:37:39.889 [2024-12-09 10:29:10.433316] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:39.889 10:29:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:39.889 10:29:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:37:39.889 10:29:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:37:39.889 10:29:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:37:39.889 10:29:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:37:39.889 10:29:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:37:39.889 10:29:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:37:39.889 10:29:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:37:39.889 10:29:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:37:39.889 Validate MD5 checksum, iteration 1 00:37:39.889 10:29:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:37:39.889 10:29:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:37:39.889 10:29:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:37:39.889 10:29:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:37:39.889 10:29:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:37:39.889 10:29:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:37:39.889 [2024-12-09 10:29:10.589674] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:37:39.889 [2024-12-09 10:29:10.589916] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85223 ] 00:37:40.148 [2024-12-09 10:29:10.786580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:40.432 [2024-12-09 10:29:10.947457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:41.846  [2024-12-09T10:29:14.026Z] Copying: 476/1024 [MB] (476 MBps) [2024-12-09T10:29:14.026Z] Copying: 927/1024 [MB] (451 MBps) [2024-12-09T10:29:15.401Z] Copying: 1024/1024 [MB] (average 460 MBps) 00:37:44.604 00:37:44.604 10:29:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:37:44.604 10:29:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:37:47.141 10:29:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:37:47.142 Validate MD5 checksum, iteration 2 00:37:47.142 10:29:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=f15f8d3224ba46d7e1965ebc9801bbaf 00:37:47.142 10:29:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ f15f8d3224ba46d7e1965ebc9801bbaf != \f\1\5\f\8\d\3\2\2\4\b\a\4\6\d\7\e\1\9\6\5\e\b\c\9\8\0\1\b\b\a\f ]] 00:37:47.142 10:29:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:37:47.142 10:29:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:37:47.142 10:29:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:37:47.142 10:29:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:37:47.142 10:29:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:37:47.142 10:29:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:37:47.142 10:29:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:37:47.142 10:29:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:37:47.142 10:29:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' 
--rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:37:47.142 [2024-12-09 10:29:17.588555] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 00:37:47.142 [2024-12-09 10:29:17.588812] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85290 ] 00:37:47.142 [2024-12-09 10:29:17.786954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:47.404 [2024-12-09 10:29:17.949376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:49.315  [2024-12-09T10:29:20.680Z] Copying: 463/1024 [MB] (463 MBps) [2024-12-09T10:29:20.937Z] Copying: 922/1024 [MB] (459 MBps) [2024-12-09T10:29:22.311Z] Copying: 1024/1024 [MB] (average 463 MBps) 00:37:51.514 00:37:51.773 10:29:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:37:51.773 10:29:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:37:54.308 10:29:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:37:54.308 10:29:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=a24d83cbecb8043da2f316bbb1765311 00:37:54.308 10:29:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ a24d83cbecb8043da2f316bbb1765311 != \a\2\4\d\8\3\c\b\e\c\b\8\0\4\3\d\a\2\f\3\1\6\b\b\b\1\7\6\5\3\1\1 ]] 00:37:54.308 10:29:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:37:54.308 10:29:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:37:54.308 10:29:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:37:54.308 10:29:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:37:54.308 10:29:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:37:54.308 10:29:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:37:54.308 10:29:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:37:54.308 10:29:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:37:54.308 10:29:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:37:54.308 10:29:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:37:54.308 10:29:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 85181 ]] 00:37:54.308 10:29:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 85181 00:37:54.308 10:29:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 85181 ']' 00:37:54.308 10:29:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 85181 00:37:54.308 10:29:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:37:54.308 10:29:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:54.308 10:29:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85181 00:37:54.308 killing process with pid 85181 00:37:54.308 10:29:24 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:54.308 10:29:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:54.308 10:29:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85181' 00:37:54.308 10:29:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 85181 00:37:54.308 10:29:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 85181 00:37:55.266 [2024-12-09 10:29:25.760169] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:37:55.266 [2024-12-09 10:29:25.780460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.266 [2024-12-09 10:29:25.780524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:37:55.266 [2024-12-09 10:29:25.780544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:37:55.266 [2024-12-09 10:29:25.780556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.266 [2024-12-09 10:29:25.780584] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:37:55.266 [2024-12-09 10:29:25.785026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.266 [2024-12-09 10:29:25.785110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:37:55.266 [2024-12-09 10:29:25.785150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.420 ms 00:37:55.266 [2024-12-09 10:29:25.785185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.266 [2024-12-09 10:29:25.785518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.266 [2024-12-09 10:29:25.785548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:37:55.266 [2024-12-09 10:29:25.785567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.293 ms 00:37:55.266 [2024-12-09 10:29:25.785578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.266 [2024-12-09 10:29:25.786916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.266 [2024-12-09 10:29:25.787002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:37:55.266 [2024-12-09 10:29:25.787053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.315 ms 00:37:55.266 [2024-12-09 10:29:25.787074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.266 [2024-12-09 10:29:25.788499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.266 [2024-12-09 10:29:25.788554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:37:55.266 [2024-12-09 10:29:25.788569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.347 ms 00:37:55.266 [2024-12-09 10:29:25.788580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.266 [2024-12-09 10:29:25.802087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.266 [2024-12-09 10:29:25.802163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:37:55.266 [2024-12-09 10:29:25.802203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.466 ms 00:37:55.266 [2024-12-09 10:29:25.802214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.266 [2024-12-09 10:29:25.809264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Action 00:37:55.266 [2024-12-09 10:29:25.809341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:37:55.266 [2024-12-09 10:29:25.809357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.000 ms 00:37:55.266 [2024-12-09 10:29:25.809369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.266 [2024-12-09 10:29:25.809494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.266 [2024-12-09 10:29:25.809514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:37:55.267 [2024-12-09 10:29:25.809549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.086 ms 00:37:55.267 [2024-12-09 10:29:25.809593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.267 [2024-12-09 10:29:25.821183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.267 [2024-12-09 10:29:25.821252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:37:55.267 [2024-12-09 10:29:25.821267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.566 ms 00:37:55.267 [2024-12-09 10:29:25.821278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.267 [2024-12-09 10:29:25.833757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.267 [2024-12-09 10:29:25.833796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:37:55.267 [2024-12-09 10:29:25.833811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.440 ms 00:37:55.267 [2024-12-09 10:29:25.833822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.267 [2024-12-09 10:29:25.845932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.267 [2024-12-09 10:29:25.845990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:37:55.267 [2024-12-09 10:29:25.846007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.058 ms 00:37:55.267 [2024-12-09 10:29:25.846017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.267 [2024-12-09 10:29:25.857134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.267 [2024-12-09 10:29:25.857184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:37:55.267 [2024-12-09 10:29:25.857197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.025 ms 00:37:55.267 [2024-12-09 10:29:25.857207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.267 [2024-12-09 10:29:25.857244] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:37:55.267 [2024-12-09 10:29:25.857265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:37:55.267 [2024-12-09 10:29:25.857279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:37:55.267 [2024-12-09 10:29:25.857290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:37:55.267 [2024-12-09 10:29:25.857301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:37:55.267 [2024-12-09 10:29:25.857312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:37:55.267 [2024-12-09 10:29:25.857323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] 
Band 6: 0 / 261120 wr_cnt: 0 state: free 00:37:55.267 [2024-12-09 10:29:25.857333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:37:55.267 [2024-12-09 10:29:25.857343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:37:55.267 [2024-12-09 10:29:25.857353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:37:55.267 [2024-12-09 10:29:25.857364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:37:55.267 [2024-12-09 10:29:25.857375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:37:55.267 [2024-12-09 10:29:25.857385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:37:55.267 [2024-12-09 10:29:25.857396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:37:55.267 [2024-12-09 10:29:25.857406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:37:55.267 [2024-12-09 10:29:25.857416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:37:55.267 [2024-12-09 10:29:25.857442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:37:55.267 [2024-12-09 10:29:25.857469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:37:55.267 [2024-12-09 10:29:25.857497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:37:55.267 [2024-12-09 10:29:25.857511] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:37:55.267 [2024-12-09 10:29:25.857522] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 802f82c2-d2ee-43f6-a065-5842431e7a2d 00:37:55.267 [2024-12-09 10:29:25.857535] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:37:55.267 [2024-12-09 10:29:25.857546] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:37:55.267 [2024-12-09 10:29:25.857556] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:37:55.267 [2024-12-09 10:29:25.857567] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:37:55.267 [2024-12-09 10:29:25.857577] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:37:55.267 [2024-12-09 10:29:25.857596] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:37:55.267 [2024-12-09 10:29:25.857607] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:37:55.267 [2024-12-09 10:29:25.857617] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:37:55.267 [2024-12-09 10:29:25.857626] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:37:55.267 [2024-12-09 10:29:25.857637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.267 [2024-12-09 10:29:25.857658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:37:55.267 [2024-12-09 10:29:25.857681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.394 ms 00:37:55.267 [2024-12-09 10:29:25.857692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.267 [2024-12-09 10:29:25.874032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
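
Every management step in this trace is logged as an Action/name/duration/status quartet, so per-step timings can be pulled out of a saved copy of the console log to see where shutdown (or startup) time goes. A sketch, assuming the log was captured to ftl.log with one notice per line:

    awk '/trace_step/ && /name:/     { sub(/.*name: /, "");     n = $0 }
         /trace_step/ && /duration:/ { sub(/.*duration: /, ""); print $1, "ms -", n }' \
        ftl.log | sort -rn | head
    # on the steps above this would rank Persist NV cache metadata (13.466 ms)
    # and Persist trim metadata (12.440 ms) at the top
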
00:37:55.267 [2024-12-09 10:29:25.874084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:37:55.267 [2024-12-09 10:29:25.874100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.290 ms 00:37:55.267 [2024-12-09 10:29:25.874119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.267 [2024-12-09 10:29:25.874695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.267 [2024-12-09 10:29:25.874720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:37:55.267 [2024-12-09 10:29:25.874745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.549 ms 00:37:55.267 [2024-12-09 10:29:25.874757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.267 [2024-12-09 10:29:25.929916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:55.267 [2024-12-09 10:29:25.929993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:37:55.267 [2024-12-09 10:29:25.930018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:55.267 [2024-12-09 10:29:25.930030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.267 [2024-12-09 10:29:25.930096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:55.267 [2024-12-09 10:29:25.930110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:37:55.267 [2024-12-09 10:29:25.930121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:55.267 [2024-12-09 10:29:25.930131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.268 [2024-12-09 10:29:25.930261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:55.268 [2024-12-09 10:29:25.930295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:37:55.268 [2024-12-09 10:29:25.930323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:55.268 [2024-12-09 10:29:25.930333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.268 [2024-12-09 10:29:25.930364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:55.268 [2024-12-09 10:29:25.930377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:37:55.268 [2024-12-09 10:29:25.930388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:55.268 [2024-12-09 10:29:25.930398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.268 [2024-12-09 10:29:26.029800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:55.268 [2024-12-09 10:29:26.029906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:37:55.268 [2024-12-09 10:29:26.029926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:55.268 [2024-12-09 10:29:26.029964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.538 [2024-12-09 10:29:26.117021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:55.538 [2024-12-09 10:29:26.117111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:37:55.538 [2024-12-09 10:29:26.117132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:55.538 [2024-12-09 10:29:26.117144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.538 [2024-12-09 10:29:26.117294] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:55.538 [2024-12-09 10:29:26.117327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:37:55.538 [2024-12-09 10:29:26.117339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:55.538 [2024-12-09 10:29:26.117364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.538 [2024-12-09 10:29:26.117436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:55.538 [2024-12-09 10:29:26.117465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:37:55.538 [2024-12-09 10:29:26.117492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:55.538 [2024-12-09 10:29:26.117524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.538 [2024-12-09 10:29:26.117669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:55.538 [2024-12-09 10:29:26.117687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:37:55.538 [2024-12-09 10:29:26.117699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:55.539 [2024-12-09 10:29:26.117709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.539 [2024-12-09 10:29:26.117792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:55.539 [2024-12-09 10:29:26.117816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:37:55.539 [2024-12-09 10:29:26.117828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:55.539 [2024-12-09 10:29:26.117839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.539 [2024-12-09 10:29:26.117889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:55.539 [2024-12-09 10:29:26.117904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:37:55.539 [2024-12-09 10:29:26.117916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:55.539 [2024-12-09 10:29:26.117927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.539 [2024-12-09 10:29:26.118010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:55.539 [2024-12-09 10:29:26.118028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:37:55.539 [2024-12-09 10:29:26.118040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:55.539 [2024-12-09 10:29:26.118051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.539 [2024-12-09 10:29:26.118269] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 337.763 ms, result 0 00:37:56.917 10:29:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:37:56.917 10:29:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:37:56.917 10:29:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:37:56.917 10:29:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:37:56.917 10:29:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:37:56.917 10:29:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:37:56.917 Remove shared memory files 00:37:56.917 10:29:27 ftl.ftl_upgrade_shutdown -- 
ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:37:56.917 10:29:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:37:56.917 10:29:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:37:56.917 10:29:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:37:56.917 10:29:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid84950 00:37:56.917 10:29:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:37:56.917 10:29:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:37:56.917 ************************************ 00:37:56.917 END TEST ftl_upgrade_shutdown 00:37:56.917 ************************************ 00:37:56.917 00:37:56.917 real 1m35.398s 00:37:56.917 user 2m14.483s 00:37:56.917 sys 0m25.594s 00:37:56.917 10:29:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:56.917 10:29:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:37:56.917 Process with pid 77246 is not found 00:37:56.917 10:29:27 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:37:56.917 10:29:27 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:37:56.917 10:29:27 ftl -- ftl/ftl.sh@14 -- # killprocess 77246 00:37:56.917 10:29:27 ftl -- common/autotest_common.sh@954 -- # '[' -z 77246 ']' 00:37:56.917 10:29:27 ftl -- common/autotest_common.sh@958 -- # kill -0 77246 00:37:56.917 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77246) - No such process 00:37:56.917 10:29:27 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 77246 is not found' 00:37:56.917 10:29:27 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:37:56.917 10:29:27 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=85421 00:37:56.917 10:29:27 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:56.917 10:29:27 ftl -- ftl/ftl.sh@20 -- # waitforlisten 85421 00:37:56.917 10:29:27 ftl -- common/autotest_common.sh@835 -- # '[' -z 85421 ']' 00:37:56.917 10:29:27 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:56.917 10:29:27 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:56.917 10:29:27 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:56.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:56.917 10:29:27 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:56.917 10:29:27 ftl -- common/autotest_common.sh@10 -- # set +x 00:37:56.917 [2024-12-09 10:29:27.600617] Starting SPDK v25.01-pre git sha1 b4f857a04 / DPDK 24.03.0 initialization... 
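
The xtrace lines above (skip=0 -> 1024 -> 2048, tcp_dd, md5sum | cut, then the [[ ... != ... ]] comparison) imply a validation loop of roughly the following shape. This is a reconstruction from the trace, not the script itself; the testfile/checksums variables are assumptions standing in for whatever bookkeeping upgrade_shutdown.sh really uses:

    test_validate_checksum() {
        local skip=0 i sum
        for ((i = 0; i < iterations; i++)); do
            echo "Validate MD5 checksum, iteration $((i + 1))"
            # read 1024 x 1 MiB chunks back from the FTL bdev over NVMe/TCP
            tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
            skip=$((skip + 1024))
            sum=$(md5sum "$testfile" | cut -f1 -d' ')
            # must match the checksum recorded when the data was written,
            # before the FTL device was shut down and restarted
            [[ $sum != "${checksums[i]}" ]] && return 1
        done
    }

Both iterations here matched (f15f8d3224ba46d7e1965ebc9801bbaf, then a24d83cbecb8043da2f316bbb1765311), which is the point of ftl_upgrade_shutdown: data written before the shutdown must read back bit-identical after the restart.
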
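
The killprocess helper traced through autotest_common.sh (pid 85181 above, and the already-gone pid 77246 here) follows a guard-then-reap pattern. A simplified sketch of the logic the trace shows; the real helper also special-cases sudo-wrapped processes, which is only hinted at by the comm= check:

    killprocess() {
        local pid=$1 name
        [[ -n $pid ]] || return 1                # the '[' -z ... ']' guard
        if ! kill -0 "$pid" 2>/dev/null; then    # probe without signalling
            echo "Process with pid $pid is not found"
            return 0                             # the pid-77246 branch above
        fi
        name=$(ps --no-headers -o comm= "$pid")  # reactor_0 in this run
        [[ $name == sudo ]] && return 1          # sudo wrappers handled specially; simplified here
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                              # reap it so the next test starts clean
    }
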
00:37:56.917 [2024-12-09 10:29:27.600806] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85421 ] 00:37:57.176 [2024-12-09 10:29:27.772298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:57.176 [2024-12-09 10:29:27.907583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:58.111 10:29:28 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:58.111 10:29:28 ftl -- common/autotest_common.sh@868 -- # return 0 00:37:58.111 10:29:28 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:37:58.370 nvme0n1 00:37:58.370 10:29:29 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:37:58.370 10:29:29 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:37:58.370 10:29:29 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:37:58.939 10:29:29 ftl -- ftl/common.sh@28 -- # stores=ef105c41-5827-4490-aae6-1ee31243a77b 00:37:58.939 10:29:29 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:37:58.939 10:29:29 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ef105c41-5827-4490-aae6-1ee31243a77b 00:37:59.198 10:29:29 ftl -- ftl/ftl.sh@23 -- # killprocess 85421 00:37:59.198 10:29:29 ftl -- common/autotest_common.sh@954 -- # '[' -z 85421 ']' 00:37:59.198 10:29:29 ftl -- common/autotest_common.sh@958 -- # kill -0 85421 00:37:59.198 10:29:29 ftl -- common/autotest_common.sh@959 -- # uname 00:37:59.198 10:29:29 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:59.198 10:29:29 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85421 00:37:59.198 killing process with pid 85421 00:37:59.198 10:29:29 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:59.198 10:29:29 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:59.198 10:29:29 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85421' 00:37:59.198 10:29:29 ftl -- common/autotest_common.sh@973 -- # kill 85421 00:37:59.198 10:29:29 ftl -- common/autotest_common.sh@978 -- # wait 85421 00:38:01.731 10:29:32 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:38:01.731 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:01.731 Waiting for block devices as requested 00:38:01.990 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:38:01.990 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:38:01.990 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:38:02.248 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:38:07.518 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:38:07.518 10:29:37 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:38:07.518 Remove shared memory files 00:38:07.518 10:29:37 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:38:07.518 10:29:37 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:38:07.518 10:29:37 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:38:07.518 10:29:37 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:38:07.518 10:29:37 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:38:07.518 10:29:37 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:38:07.518 00:38:07.518 real 
12m36.775s 00:38:07.518 user 15m42.337s 00:38:07.518 sys 1m38.193s 00:38:07.518 10:29:37 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:07.518 10:29:38 ftl -- common/autotest_common.sh@10 -- # set +x 00:38:07.518 ************************************ 00:38:07.518 END TEST ftl 00:38:07.518 ************************************ 00:38:07.518 10:29:38 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:38:07.518 10:29:38 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:38:07.518 10:29:38 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:38:07.518 10:29:38 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:38:07.518 10:29:38 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:38:07.518 10:29:38 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:38:07.518 10:29:38 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:38:07.518 10:29:38 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:38:07.518 10:29:38 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:38:07.518 10:29:38 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:38:07.518 10:29:38 -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:07.518 10:29:38 -- common/autotest_common.sh@10 -- # set +x 00:38:07.519 10:29:38 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:38:07.519 10:29:38 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:38:07.519 10:29:38 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:38:07.519 10:29:38 -- common/autotest_common.sh@10 -- # set +x 00:38:08.895 INFO: APP EXITING 00:38:08.895 INFO: killing all VMs 00:38:08.895 INFO: killing vhost app 00:38:08.895 INFO: EXIT DONE 00:38:09.154 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:09.721 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:38:09.721 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:38:09.721 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:38:09.721 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:38:10.288 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:10.547 Cleaning 00:38:10.547 Removing: /var/run/dpdk/spdk0/config 00:38:10.547 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:38:10.547 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:38:10.547 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:38:10.547 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:38:10.547 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:38:10.547 Removing: /var/run/dpdk/spdk0/hugepage_info 00:38:10.547 Removing: /var/run/dpdk/spdk0 00:38:10.547 Removing: /var/run/dpdk/spdk_pid57857 00:38:10.547 Removing: /var/run/dpdk/spdk_pid58098 00:38:10.547 Removing: /var/run/dpdk/spdk_pid58332 00:38:10.547 Removing: /var/run/dpdk/spdk_pid58442 00:38:10.547 Removing: /var/run/dpdk/spdk_pid58498 00:38:10.547 Removing: /var/run/dpdk/spdk_pid58637 00:38:10.547 Removing: /var/run/dpdk/spdk_pid58660 00:38:10.547 Removing: /var/run/dpdk/spdk_pid58876 00:38:10.547 Removing: /var/run/dpdk/spdk_pid58982 00:38:10.547 Removing: /var/run/dpdk/spdk_pid59100 00:38:10.547 Removing: /var/run/dpdk/spdk_pid59228 00:38:10.547 Removing: /var/run/dpdk/spdk_pid59341 00:38:10.547 Removing: /var/run/dpdk/spdk_pid59381 00:38:10.547 Removing: /var/run/dpdk/spdk_pid59417 00:38:10.547 Removing: /var/run/dpdk/spdk_pid59493 00:38:10.547 Removing: /var/run/dpdk/spdk_pid59610 00:38:10.547 Removing: /var/run/dpdk/spdk_pid60101 00:38:10.547 Removing: /var/run/dpdk/spdk_pid60177 00:38:10.547 
00:38:10.547 Removing: /var/run/dpdk/spdk_pid60252
00:38:10.547 Removing: /var/run/dpdk/spdk_pid60274
00:38:10.547 Removing: /var/run/dpdk/spdk_pid60433
00:38:10.547 Removing: /var/run/dpdk/spdk_pid60460
00:38:10.547 Removing: /var/run/dpdk/spdk_pid60619
00:38:10.547 Removing: /var/run/dpdk/spdk_pid60635
00:38:10.547 Removing: /var/run/dpdk/spdk_pid60710
00:38:10.547 Removing: /var/run/dpdk/spdk_pid60738
00:38:10.547 Removing: /var/run/dpdk/spdk_pid60803
00:38:10.547 Removing: /var/run/dpdk/spdk_pid60821
00:38:10.547 Removing: /var/run/dpdk/spdk_pid61026
00:38:10.547 Removing: /var/run/dpdk/spdk_pid61064
00:38:10.547 Removing: /var/run/dpdk/spdk_pid61153
00:38:10.547 Removing: /var/run/dpdk/spdk_pid61347
00:38:10.547 Removing: /var/run/dpdk/spdk_pid61453
00:38:10.547 Removing: /var/run/dpdk/spdk_pid61495
00:38:10.547 Removing: /var/run/dpdk/spdk_pid61990
00:38:10.547 Removing: /var/run/dpdk/spdk_pid62098
00:38:10.547 Removing: /var/run/dpdk/spdk_pid62214
00:38:10.547 Removing: /var/run/dpdk/spdk_pid62267
00:38:10.547 Removing: /var/run/dpdk/spdk_pid62298
00:38:10.547 Removing: /var/run/dpdk/spdk_pid62382
00:38:10.547 Removing: /var/run/dpdk/spdk_pid63025
00:38:10.547 Removing: /var/run/dpdk/spdk_pid63067
00:38:10.547 Removing: /var/run/dpdk/spdk_pid63599
00:38:10.547 Removing: /var/run/dpdk/spdk_pid63710
00:38:10.806 Removing: /var/run/dpdk/spdk_pid63825
00:38:10.806 Removing: /var/run/dpdk/spdk_pid63883
00:38:10.806 Removing: /var/run/dpdk/spdk_pid63909
00:38:10.806 Removing: /var/run/dpdk/spdk_pid63940
00:38:10.806 Removing: /var/run/dpdk/spdk_pid65856
00:38:10.806 Removing: /var/run/dpdk/spdk_pid66010
00:38:10.807 Removing: /var/run/dpdk/spdk_pid66018
00:38:10.807 Removing: /var/run/dpdk/spdk_pid66031
00:38:10.807 Removing: /var/run/dpdk/spdk_pid66079
00:38:10.807 Removing: /var/run/dpdk/spdk_pid66083
00:38:10.807 Removing: /var/run/dpdk/spdk_pid66105
00:38:10.807 Removing: /var/run/dpdk/spdk_pid66145
00:38:10.807 Removing: /var/run/dpdk/spdk_pid66149
00:38:10.807 Removing: /var/run/dpdk/spdk_pid66172
00:38:10.807 Removing: /var/run/dpdk/spdk_pid66211
00:38:10.807 Removing: /var/run/dpdk/spdk_pid66217
00:38:10.807 Removing: /var/run/dpdk/spdk_pid66240
00:38:10.807 Removing: /var/run/dpdk/spdk_pid67641
00:38:10.807 Removing: /var/run/dpdk/spdk_pid67760
00:38:10.807 Removing: /var/run/dpdk/spdk_pid69175
00:38:10.807 Removing: /var/run/dpdk/spdk_pid70913
00:38:10.807 Removing: /var/run/dpdk/spdk_pid70998
00:38:10.807 Removing: /var/run/dpdk/spdk_pid71080
00:38:10.807 Removing: /var/run/dpdk/spdk_pid71190
00:38:10.807 Removing: /var/run/dpdk/spdk_pid71294
00:38:10.807 Removing: /var/run/dpdk/spdk_pid71395
00:38:10.807 Removing: /var/run/dpdk/spdk_pid71482
00:38:10.807 Removing: /var/run/dpdk/spdk_pid71568
00:38:10.807 Removing: /var/run/dpdk/spdk_pid71681
00:38:10.807 Removing: /var/run/dpdk/spdk_pid71777
00:38:10.807 Removing: /var/run/dpdk/spdk_pid71880
00:38:10.807 Removing: /var/run/dpdk/spdk_pid71965
00:38:10.807 Removing: /var/run/dpdk/spdk_pid72040
00:38:10.807 Removing: /var/run/dpdk/spdk_pid72150
00:38:10.807 Removing: /var/run/dpdk/spdk_pid72249
00:38:10.807 Removing: /var/run/dpdk/spdk_pid72345
00:38:10.807 Removing: /var/run/dpdk/spdk_pid72431
00:38:10.807 Removing: /var/run/dpdk/spdk_pid72515
00:38:10.807 Removing: /var/run/dpdk/spdk_pid72620
00:38:10.807 Removing: /var/run/dpdk/spdk_pid72719
00:38:10.807 Removing: /var/run/dpdk/spdk_pid72820
00:38:10.807 Removing: /var/run/dpdk/spdk_pid72904
00:38:10.807 Removing: /var/run/dpdk/spdk_pid72985
00:38:10.807 Removing: /var/run/dpdk/spdk_pid73066
00:38:10.807 Removing: /var/run/dpdk/spdk_pid73138
00:38:10.807 Removing: /var/run/dpdk/spdk_pid73247
00:38:10.807 Removing: /var/run/dpdk/spdk_pid73348
00:38:10.807 Removing: /var/run/dpdk/spdk_pid73444
00:38:10.807 Removing: /var/run/dpdk/spdk_pid73524
00:38:10.807 Removing: /var/run/dpdk/spdk_pid73598
00:38:10.807 Removing: /var/run/dpdk/spdk_pid73684
00:38:10.807 Removing: /var/run/dpdk/spdk_pid73760
00:38:10.807 Removing: /var/run/dpdk/spdk_pid73869
00:38:10.807 Removing: /var/run/dpdk/spdk_pid73963
00:38:10.807 Removing: /var/run/dpdk/spdk_pid74112
00:38:10.807 Removing: /var/run/dpdk/spdk_pid74412
00:38:10.807 Removing: /var/run/dpdk/spdk_pid74453
00:38:10.807 Removing: /var/run/dpdk/spdk_pid74947
00:38:10.807 Removing: /var/run/dpdk/spdk_pid75126
00:38:10.807 Removing: /var/run/dpdk/spdk_pid75225
00:38:10.807 Removing: /var/run/dpdk/spdk_pid75339
00:38:10.807 Removing: /var/run/dpdk/spdk_pid75405
00:38:10.807 Removing: /var/run/dpdk/spdk_pid75429
00:38:10.807 Removing: /var/run/dpdk/spdk_pid75720
00:38:10.807 Removing: /var/run/dpdk/spdk_pid75792
00:38:10.807 Removing: /var/run/dpdk/spdk_pid75883
00:38:10.807 Removing: /var/run/dpdk/spdk_pid76303
00:38:10.807 Removing: /var/run/dpdk/spdk_pid76455
00:38:10.807 Removing: /var/run/dpdk/spdk_pid77246
00:38:10.807 Removing: /var/run/dpdk/spdk_pid77396
00:38:10.807 Removing: /var/run/dpdk/spdk_pid77609
00:38:10.807 Removing: /var/run/dpdk/spdk_pid77713
00:38:10.807 Removing: /var/run/dpdk/spdk_pid78088
00:38:10.807 Removing: /var/run/dpdk/spdk_pid78370
00:38:10.807 Removing: /var/run/dpdk/spdk_pid78733
00:38:10.807 Removing: /var/run/dpdk/spdk_pid78945
00:38:10.807 Removing: /var/run/dpdk/spdk_pid79098
00:38:10.807 Removing: /var/run/dpdk/spdk_pid79167
00:38:10.807 Removing: /var/run/dpdk/spdk_pid79327
00:38:10.807 Removing: /var/run/dpdk/spdk_pid79358
00:38:10.807 Removing: /var/run/dpdk/spdk_pid79433
00:38:10.807 Removing: /var/run/dpdk/spdk_pid79648
00:38:10.807 Removing: /var/run/dpdk/spdk_pid79904
00:38:10.807 Removing: /var/run/dpdk/spdk_pid80363
00:38:10.807 Removing: /var/run/dpdk/spdk_pid80801
00:38:11.066 Removing: /var/run/dpdk/spdk_pid81259
00:38:11.066 Removing: /var/run/dpdk/spdk_pid81851
00:38:11.066 Removing: /var/run/dpdk/spdk_pid81999
00:38:11.066 Removing: /var/run/dpdk/spdk_pid82092
00:38:11.066 Removing: /var/run/dpdk/spdk_pid82808
00:38:11.066 Removing: /var/run/dpdk/spdk_pid82878
00:38:11.066 Removing: /var/run/dpdk/spdk_pid83372
00:38:11.066 Removing: /var/run/dpdk/spdk_pid83794
00:38:11.066 Removing: /var/run/dpdk/spdk_pid84364
00:38:11.066 Removing: /var/run/dpdk/spdk_pid84494
00:38:11.066 Removing: /var/run/dpdk/spdk_pid84537
00:38:11.066 Removing: /var/run/dpdk/spdk_pid84607
00:38:11.066 Removing: /var/run/dpdk/spdk_pid84673
00:38:11.066 Removing: /var/run/dpdk/spdk_pid84738
00:38:11.066 Removing: /var/run/dpdk/spdk_pid84950
00:38:11.066 Removing: /var/run/dpdk/spdk_pid85026
00:38:11.066 Removing: /var/run/dpdk/spdk_pid85104
00:38:11.066 Removing: /var/run/dpdk/spdk_pid85181
00:38:11.066 Removing: /var/run/dpdk/spdk_pid85223
00:38:11.066 Removing: /var/run/dpdk/spdk_pid85290
00:38:11.066 Removing: /var/run/dpdk/spdk_pid85421
00:38:11.066 Clean
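
The Cleaning block above is the post-run sweep of DPDK runtime state: spdk0 holds the primary process's memory maps, and each spdk_pidNNNNN entry is left over from one spdk_tgt invocation during the run. Below is a rough sketch of that sweep; the real helper lives in the autotest scripts, and only the paths are taken from the log.

    # Drop the primary-process state (memseg fbarrays, memzone map,
    # hugepage info, config), then every accumulated per-pid entry.
    rm -f /var/run/dpdk/spdk0/config \
          /var/run/dpdk/spdk0/fbarray_mem* \
          /var/run/dpdk/spdk0/hugepage_info
    rmdir /var/run/dpdk/spdk0 2>/dev/null || true
    for d in /var/run/dpdk/spdk_pid*; do
        [ -e "$d" ] && echo "Removing: $d" && rm -rf "$d"
    done
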
00:38:11.066 10:29:41 -- common/autotest_common.sh@1453 -- # return 0
00:38:11.066 10:29:41 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:38:11.066 10:29:41 -- common/autotest_common.sh@732 -- # xtrace_disable
00:38:11.066 10:29:41 -- common/autotest_common.sh@10 -- # set +x
00:38:11.066 10:29:41 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:38:11.066 10:29:41 -- common/autotest_common.sh@732 -- # xtrace_disable
00:38:11.066 10:29:41 -- common/autotest_common.sh@10 -- # set +x
00:38:11.066 10:29:41 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:38:11.066 10:29:41 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:38:11.066 10:29:41 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:38:11.066 10:29:41 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:38:11.066 10:29:41 -- spdk/autotest.sh@398 -- # hostname
00:38:11.066 10:29:41 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:38:11.324 geninfo: WARNING: invalid characters removed from testname!
00:38:43.412 10:30:12 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:38:45.941 10:30:16 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:38:49.225 10:30:19 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:38:51.756 10:30:22 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:38:55.043 10:30:25 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:38:57.615 10:30:28 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
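
Condensed, the coverage post-processing just traced is a capture / merge / filter pipeline. Below is a sketch with the repeated --rc option list factored into LCOV_OPTS; the variable names are introduced here for readability, the values are the literal paths and hostname from the trace, and the extra --ignore-errors flag used on the '/usr/*' pass is omitted for brevity.

    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
    --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 \
    --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1"
    out=/home/vagrant/spdk_repo/spdk/../output
    spdk_dir=/home/vagrant/spdk_repo/spdk
    host=fedora39-cloud-1721788873-2326

    # Capture post-test counters, then merge them with the pre-test baseline.
    lcov $LCOV_OPTS -q -c --no-external -d "$spdk_dir" -t "$host" -o "$out/cov_test.info"
    lcov $LCOV_OPTS -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"

    # Strip DPDK, system headers, and helper apps from the merged report in place.
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $LCOV_OPTS -q -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
    done
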
00:39:00.901 10:30:30 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:39:00.901 10:30:30 -- spdk/autorun.sh@1 -- $ timing_finish
00:39:00.901 10:30:30 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:39:00.901 10:30:30 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:39:00.901 10:30:30 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:39:00.901 10:30:30 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:39:00.901 + [[ -n 5396 ]]
00:39:00.901 + sudo kill 5396
00:39:00.910 [Pipeline] }
00:39:00.925 [Pipeline] // timeout
00:39:00.930 [Pipeline] }
00:39:00.945 [Pipeline] // stage
00:39:00.950 [Pipeline] }
00:39:00.965 [Pipeline] // catchError
00:39:00.973 [Pipeline] stage
00:39:00.975 [Pipeline] { (Stop VM)
00:39:00.988 [Pipeline] sh
00:39:01.268 + vagrant halt
00:39:05.453 ==> default: Halting domain...
00:39:12.035 [Pipeline] sh
00:39:12.316 + vagrant destroy -f
00:39:16.506 ==> default: Removing domain...
00:39:16.518 [Pipeline] sh
00:39:16.800 + mv output /var/jenkins/workspace/nvme-vg-autotest_3/output
00:39:16.810 [Pipeline] }
00:39:16.828 [Pipeline] // stage
00:39:16.834 [Pipeline] }
00:39:16.849 [Pipeline] // dir
00:39:16.856 [Pipeline] }
00:39:16.873 [Pipeline] // wrap
00:39:16.880 [Pipeline] }
00:39:16.896 [Pipeline] // catchError
00:39:16.908 [Pipeline] stage
00:39:16.910 [Pipeline] { (Epilogue)
00:39:16.925 [Pipeline] sh
00:39:17.210 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:39:25.394 [Pipeline] catchError
00:39:25.396 [Pipeline] {
00:39:25.410 [Pipeline] sh
00:39:25.696 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:39:25.696 Artifacts sizes are good
00:39:25.705 [Pipeline] }
00:39:25.719 [Pipeline] // catchError
00:39:25.731 [Pipeline] archiveArtifacts
00:39:25.738 Archiving artifacts
00:39:25.846 [Pipeline] cleanWs
00:39:25.861 [WS-CLEANUP] Deleting project workspace...
00:39:25.862 [WS-CLEANUP] Deferred wipeout is used...
00:39:25.869 [WS-CLEANUP] done
00:39:25.870 [Pipeline] }
00:39:25.888 [Pipeline] // stage
00:39:25.896 [Pipeline] }
00:39:25.912 [Pipeline] // node
00:39:25.919 [Pipeline] End of Pipeline
00:39:25.955 Finished: SUCCESS
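
One closing note on the timing_finish step traced near the end of the log: when the FlameGraph scripts are present, the per-step timing log is rendered as a flame graph. A minimal sketch of that conditional follows; the redirect to an .svg file is an assumption (flamegraph.pl writes SVG to stdout, and the log does not show where the output went).

    timing=/home/vagrant/spdk_repo/spdk/../output/timing.txt
    fg=/usr/local/FlameGraph/flamegraph.pl
    # Render only when both the timing log and the renderer exist.
    if [[ -e $timing && -x $fg ]]; then
        "$fg" --title 'Build Timing' --nametype Step: --countname seconds \
            "$timing" > "${timing%.txt}.svg"
    fi
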