00:00:00.000 Started by upstream project "autotest-per-patch" build number 132713
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.083 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.084 The recommended git tool is: git
00:00:00.084 using credential 00000000-0000-0000-0000-000000000002
00:00:00.086 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.137 Fetching changes from the remote Git repository
00:00:00.144 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.188 Using shallow fetch with depth 1
00:00:00.188 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.188 > git --version # timeout=10
00:00:00.229 > git --version # 'git version 2.39.2'
00:00:00.229 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.247 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.247 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.547 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.561 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.573 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:06.573 > git config core.sparsecheckout # timeout=10
00:00:06.588 > git read-tree -mu HEAD # timeout=10
00:00:06.607 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:06.630 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:06.630 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:06.748 [Pipeline] Start of Pipeline
00:00:06.761 [Pipeline] library
00:00:06.762 Loading library shm_lib@master
00:00:06.762 Library shm_lib@master is cached. Copying from home.
00:00:06.776 [Pipeline] node
00:00:06.787 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest_2
00:00:06.788 [Pipeline] {
00:00:06.797 [Pipeline] catchError
00:00:06.798 [Pipeline] {
00:00:06.814 [Pipeline] wrap
00:00:06.823 [Pipeline] {
00:00:06.831 [Pipeline] stage
00:00:06.832 [Pipeline] { (Prologue)
00:00:06.848 [Pipeline] echo
00:00:06.849 Node: VM-host-SM38
00:00:06.854 [Pipeline] cleanWs
00:00:06.863 [WS-CLEANUP] Deleting project workspace...
00:00:06.863 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.870 [WS-CLEANUP] done
00:00:07.061 [Pipeline] setCustomBuildProperty
00:00:07.155 [Pipeline] httpRequest
00:00:07.539 [Pipeline] echo
00:00:07.541 Sorcerer 10.211.164.20 is alive
00:00:07.550 [Pipeline] retry
00:00:07.552 [Pipeline] {
00:00:07.563 [Pipeline] httpRequest
00:00:07.569 HttpMethod: GET
00:00:07.569 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.570 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.582 Response Code: HTTP/1.1 200 OK
00:00:07.583 Success: Status code 200 is in the accepted range: 200,404
00:00:07.584 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.913 [Pipeline] }
00:00:08.927 [Pipeline] // retry
00:00:08.933 [Pipeline] sh
00:00:09.216 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.229 [Pipeline] httpRequest
00:00:10.036 [Pipeline] echo
00:00:10.039 Sorcerer 10.211.164.20 is alive
00:00:10.049 [Pipeline] retry
00:00:10.052 [Pipeline] {
00:00:10.062 [Pipeline] httpRequest
00:00:10.067 HttpMethod: GET
00:00:10.068 URL: http://10.211.164.20/packages/spdk_0b1b15acc6e4930953ba62f7aa9503a96fe91c93.tar.gz
00:00:10.068 Sending request to url: http://10.211.164.20/packages/spdk_0b1b15acc6e4930953ba62f7aa9503a96fe91c93.tar.gz
00:00:10.083 Response Code: HTTP/1.1 200 OK
00:00:10.084 Success: Status code 200 is in the accepted range: 200,404
00:00:10.084 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/spdk_0b1b15acc6e4930953ba62f7aa9503a96fe91c93.tar.gz
00:04:46.146 [Pipeline] }
00:04:46.167 [Pipeline] // retry
00:04:46.176 [Pipeline] sh
00:04:46.460 + tar --no-same-owner -xf spdk_0b1b15acc6e4930953ba62f7aa9503a96fe91c93.tar.gz
00:04:48.999 [Pipeline] sh
00:04:49.278 + git -C spdk log --oneline -n5
00:04:49.278 0b1b15acc lib/reduce: Support storing metadata on backing dev. (5 of 5, test cases)
00:04:49.278 20bebc997 lib/reduce: Support storing metadata on backing dev. (4 of 5, data unmap with async metadata)
00:04:49.278 3fb854a13 lib/reduce: Support storing metadata on backing dev. (3 of 5, reload process)
00:04:49.278 f501a7223 lib/reduce: Support storing metadata on backing dev. (2 of 5, data r/w with async metadata)
00:04:49.278 8ffb12d0f lib/reduce: Support storing metadata on backing dev. (1 of 5, struct define and init process)
00:04:49.301 [Pipeline] writeFile
00:04:49.317 [Pipeline] sh
00:04:49.596 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:04:49.607 [Pipeline] sh
00:04:49.885 + cat autorun-spdk.conf
00:04:49.886 SPDK_RUN_FUNCTIONAL_TEST=1
00:04:49.886 SPDK_TEST_NVME=1
00:04:49.886 SPDK_TEST_FTL=1
00:04:49.886 SPDK_TEST_ISAL=1
00:04:49.886 SPDK_RUN_ASAN=1
00:04:49.886 SPDK_RUN_UBSAN=1
00:04:49.886 SPDK_TEST_XNVME=1
00:04:49.886 SPDK_TEST_NVME_FDP=1
00:04:49.886 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:04:49.891 RUN_NIGHTLY=0
00:04:49.894 [Pipeline] }
00:04:49.908 [Pipeline] // stage
00:04:49.926 [Pipeline] stage
00:04:49.928 [Pipeline] { (Run VM)
00:04:49.941 [Pipeline] sh
00:04:50.221 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:04:50.221 + echo 'Start stage prepare_nvme.sh'
00:04:50.221 Start stage prepare_nvme.sh
00:04:50.221 + [[ -n 3 ]]
00:04:50.221 + disk_prefix=ex3
00:04:50.221 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_2 ]]
00:04:50.221 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf ]]
00:04:50.221 + source /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf
00:04:50.221 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:50.221 ++ SPDK_TEST_NVME=1
00:04:50.221 ++ SPDK_TEST_FTL=1
00:04:50.221 ++ SPDK_TEST_ISAL=1
00:04:50.221 ++ SPDK_RUN_ASAN=1
00:04:50.221 ++ SPDK_RUN_UBSAN=1
00:04:50.221 ++ SPDK_TEST_XNVME=1
00:04:50.221 ++ SPDK_TEST_NVME_FDP=1
00:04:50.221 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:04:50.221 ++ RUN_NIGHTLY=0
00:04:50.221 + cd /var/jenkins/workspace/nvme-vg-autotest_2
00:04:50.221 + nvme_files=()
00:04:50.221 + declare -A nvme_files
00:04:50.221 + backend_dir=/var/lib/libvirt/images/backends
00:04:50.221 + nvme_files['nvme.img']=5G
00:04:50.221 + nvme_files['nvme-cmb.img']=5G
00:04:50.221 + nvme_files['nvme-multi0.img']=4G
00:04:50.221 + nvme_files['nvme-multi1.img']=4G
00:04:50.221 + nvme_files['nvme-multi2.img']=4G
00:04:50.221 + nvme_files['nvme-openstack.img']=8G
00:04:50.221 + nvme_files['nvme-zns.img']=5G
00:04:50.221 + (( SPDK_TEST_NVME_PMR == 1 ))
00:04:50.221 + (( SPDK_TEST_FTL == 1 ))
00:04:50.221 + nvme_files["nvme-ftl.img"]=6G
00:04:50.221 + (( SPDK_TEST_NVME_FDP == 1 ))
00:04:50.221 + nvme_files["nvme-fdp.img"]=1G
00:04:50.221 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:04:50.221 + for nvme in "${!nvme_files[@]}"
00:04:50.221 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G
00:04:50.221 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:04:50.221 + for nvme in "${!nvme_files[@]}"
00:04:50.221 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-ftl.img -s 6G
00:04:50.786 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:04:50.786 + for nvme in "${!nvme_files[@]}"
00:04:50.786 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G
00:04:50.786 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:04:50.786 + for nvme in "${!nvme_files[@]}"
00:04:50.786 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G
00:04:50.786 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:04:50.786 + for nvme in "${!nvme_files[@]}"
00:04:50.786 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G
00:04:50.786 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:04:50.786 + for nvme in "${!nvme_files[@]}"
00:04:50.786 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G
00:04:51.044 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:04:51.044 + for nvme in "${!nvme_files[@]}"
00:04:51.045 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G
00:04:51.303 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:04:51.303 + for nvme in "${!nvme_files[@]}"
00:04:51.303 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-fdp.img -s 1G
00:04:51.303 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:04:51.303 + for nvme in "${!nvme_files[@]}"
00:04:51.303 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G
00:04:51.560 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:04:51.560 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu
00:04:51.560 + echo 'End stage prepare_nvme.sh'
00:04:51.560 End stage prepare_nvme.sh
00:04:51.572 [Pipeline] sh
00:04:51.850 + DISTRO=fedora39
00:04:51.850 + CPUS=10
00:04:51.850 + RAM=12288
00:04:51.850 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:04:51.850 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex3-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:04:51.850
00:04:51.850 DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant
00:04:51.850 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk
00:04:51.850 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_2
00:04:51.850 HELP=0
00:04:51.850 DRY_RUN=0
00:04:51.850 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme-ftl.img,/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,/var/lib/libvirt/images/backends/ex3-nvme-fdp.img,
00:04:51.850 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:04:51.850 NVME_AUTO_CREATE=0
00:04:51.850 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,,
00:04:51.850 NVME_CMB=,,,,
00:04:51.850 NVME_PMR=,,,,
00:04:51.850 NVME_ZNS=,,,,
00:04:51.850 NVME_MS=true,,,,
00:04:51.850 NVME_FDP=,,,on,
00:04:51.850 SPDK_VAGRANT_DISTRO=fedora39
00:04:51.850 SPDK_VAGRANT_VMCPU=10
00:04:51.850 SPDK_VAGRANT_VMRAM=12288
00:04:51.850 SPDK_VAGRANT_PROVIDER=libvirt
00:04:51.850 SPDK_VAGRANT_HTTP_PROXY=
00:04:51.850 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:04:51.850 SPDK_OPENSTACK_NETWORK=0
00:04:51.850 VAGRANT_PACKAGE_BOX=0
00:04:51.850 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile
00:04:51.850 FORCE_DISTRO=true
00:04:51.850 VAGRANT_BOX_VERSION=
00:04:51.850 EXTRA_VAGRANTFILES=
00:04:51.850 NIC_MODEL=e1000
00:04:51.850
00:04:51.850 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt'
00:04:51.850 /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest_2
00:04:54.379 Bringing machine 'default' up with 'libvirt' provider...
00:04:54.637 ==> default: Creating image (snapshot of base box volume).
00:04:54.637 ==> default: Creating domain with the following settings...
00:04:54.637 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733466667_3b87c30a1c087ce7e1c7
00:04:54.637 ==> default: -- Domain type: kvm
00:04:54.637 ==> default: -- Cpus: 10
00:04:54.637 ==> default: -- Feature: acpi
00:04:54.637 ==> default: -- Feature: apic
00:04:54.637 ==> default: -- Feature: pae
00:04:54.637 ==> default: -- Memory: 12288M
00:04:54.637 ==> default: -- Memory Backing: hugepages:
00:04:54.637 ==> default: -- Management MAC:
00:04:54.637 ==> default: -- Loader:
00:04:54.637 ==> default: -- Nvram:
00:04:54.637 ==> default: -- Base box: spdk/fedora39
00:04:54.637 ==> default: -- Storage pool: default
00:04:54.637 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733466667_3b87c30a1c087ce7e1c7.img (20G)
00:04:54.637 ==> default: -- Volume Cache: default
00:04:54.637 ==> default: -- Kernel:
00:04:54.637 ==> default: -- Initrd:
00:04:54.637 ==> default: -- Graphics Type: vnc
00:04:54.637 ==> default: -- Graphics Port: -1
00:04:54.637 ==> default: -- Graphics IP: 127.0.0.1
00:04:54.637 ==> default: -- Graphics Password: Not defined
00:04:54.637 ==> default: -- Video Type: cirrus
00:04:54.637 ==> default: -- Video VRAM: 9216
00:04:54.637 ==> default: -- Sound Type:
00:04:54.637 ==> default: -- Keymap: en-us
00:04:54.637 ==> default: -- TPM Path:
00:04:54.637 ==> default: -- INPUT: type=mouse, bus=ps2
00:04:54.637 ==> default: -- Command line args:
00:04:54.637 ==> default: -> value=-device,
00:04:54.637 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:04:54.638 ==> default: -> value=-drive,
00:04:54.638 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:04:54.638 ==> default: -> value=-device,
00:04:54.638 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:04:54.638 ==> default: -> value=-device,
00:04:54.638 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:04:54.638 ==> default: -> value=-drive,
00:04:54.638 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-1-drive0,
00:04:54.638 ==> default: -> value=-device,
00:04:54.638 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:04:54.638 ==> default: -> value=-device,
00:04:54.638 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:04:54.638 ==> default: -> value=-drive,
00:04:54.638 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:04:54.638 ==> default: -> value=-device,
00:04:54.638 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:04:54.638 ==> default: -> value=-drive,
00:04:54.638 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:04:54.638 ==> default: -> value=-device,
00:04:54.638 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:04:54.638 ==> default: -> value=-drive,
00:04:54.638 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:04:54.638 ==> default: -> value=-device,
00:04:54.638 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:04:54.638 ==> default: -> value=-device,
00:04:54.638 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:04:54.638 ==> default: -> value=-device,
00:04:54.638 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:04:54.638 ==> default: -> value=-drive,
00:04:54.638 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:04:54.638 ==> default: -> value=-device,
00:04:54.638 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:04:54.896 ==> default: Creating shared folders metadata...
00:04:54.896 ==> default: Starting domain.
00:04:56.272 ==> default: Waiting for domain to get an IP address...
00:05:11.221 ==> default: Waiting for SSH to become available...
00:05:11.221 ==> default: Configuring and enabling network interfaces...
00:05:14.542 default: SSH address: 192.168.121.48:22
00:05:14.542 default: SSH username: vagrant
00:05:14.542 default: SSH auth method: private key
00:05:16.453 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk
00:05:26.501 ==> default: Mounting SSHFS shared folder...
00:05:27.065 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:05:27.065 ==> default: Checking Mount..
00:05:27.996 ==> default: Folder Successfully Mounted!
00:05:28.254
00:05:28.254 SUCCESS!
00:05:28.254
00:05:28.254 cd to /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use.
00:05:28.254 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:05:28.254 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm.
00:05:28.254
00:05:28.261 [Pipeline] }
00:05:28.278 [Pipeline] // stage
00:05:28.285 [Pipeline] dir
00:05:28.286 Running in /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt
00:05:28.287 [Pipeline] {
00:05:28.301 [Pipeline] catchError
00:05:28.303 [Pipeline] {
00:05:28.316 [Pipeline] sh
00:05:28.590 + vagrant ssh-config --host vagrant
00:05:28.590 + sed -ne '/^Host/,$p'
00:05:28.590 + tee ssh_conf
00:05:31.114 Host vagrant
00:05:31.114 HostName 192.168.121.48
00:05:31.114 User vagrant
00:05:31.114 Port 22
00:05:31.114 UserKnownHostsFile /dev/null
00:05:31.114 StrictHostKeyChecking no
00:05:31.114 PasswordAuthentication no
00:05:31.114 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:05:31.114 IdentitiesOnly yes
00:05:31.114 LogLevel FATAL
00:05:31.114 ForwardAgent yes
00:05:31.114 ForwardX11 yes
00:05:31.114
00:05:31.126 [Pipeline] withEnv
00:05:31.129 [Pipeline] {
00:05:31.142 [Pipeline] sh
00:05:31.418 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:05:31.418 source /etc/os-release
00:05:31.419 [[ -e /image.version ]] && img=$(< /image.version)
00:05:31.419 # Minimal, systemd-like check.
00:05:31.419 if [[ -e /.dockerenv ]]; then
00:05:31.419 # Clear garbage from the node'\''s name:
00:05:31.419 # agt-er_autotest_547-896 -> autotest_547-896
00:05:31.419 # $HOSTNAME is the actual container id
00:05:31.419 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:05:31.419 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:05:31.419 # We can assume this is a mount from a host where container is running,
00:05:31.419 # so fetch its hostname to easily identify the target swarm worker.
00:05:31.419 container="$(< /etc/hostname) ($agent)"
00:05:31.419 else
00:05:31.419 # Fallback
00:05:31.419 container=$agent
00:05:31.419 fi
00:05:31.419 fi
00:05:31.419 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:05:31.419 '
00:05:31.686 [Pipeline] }
00:05:31.703 [Pipeline] // withEnv
00:05:31.713 [Pipeline] setCustomBuildProperty
00:05:31.729 [Pipeline] stage
00:05:31.731 [Pipeline] { (Tests)
00:05:31.749 [Pipeline] sh
00:05:32.026 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:05:32.295 [Pipeline] sh
00:05:32.571 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:05:32.840 [Pipeline] timeout
00:05:32.841 Timeout set to expire in 50 min
00:05:32.843 [Pipeline] {
00:05:32.856 [Pipeline] sh
00:05:33.132 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:05:33.698 HEAD is now at 0b1b15acc lib/reduce: Support storing metadata on backing dev. (5 of 5, test cases)
00:05:33.709 [Pipeline] sh
00:05:33.985 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:05:34.256 [Pipeline] sh
00:05:34.535 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:05:34.840 [Pipeline] sh
00:05:35.119 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:05:35.377 ++ readlink -f spdk_repo
00:05:35.377 + DIR_ROOT=/home/vagrant/spdk_repo
00:05:35.377 + [[ -n /home/vagrant/spdk_repo ]]
00:05:35.377 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:05:35.377 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:05:35.377 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:05:35.377 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:05:35.377 + [[ -d /home/vagrant/spdk_repo/output ]]
00:05:35.377 + [[ nvme-vg-autotest == pkgdep-* ]]
00:05:35.377 + cd /home/vagrant/spdk_repo
00:05:35.377 + source /etc/os-release
00:05:35.377 ++ NAME='Fedora Linux'
00:05:35.377 ++ VERSION='39 (Cloud Edition)'
00:05:35.377 ++ ID=fedora
00:05:35.377 ++ VERSION_ID=39
00:05:35.377 ++ VERSION_CODENAME=
00:05:35.377 ++ PLATFORM_ID=platform:f39
00:05:35.377 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:05:35.377 ++ ANSI_COLOR='0;38;2;60;110;180'
00:05:35.377 ++ LOGO=fedora-logo-icon
00:05:35.377 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:05:35.377 ++ HOME_URL=https://fedoraproject.org/
00:05:35.377 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:05:35.377 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:05:35.377 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:05:35.377 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:05:35.377 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:05:35.377 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:05:35.377 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:05:35.377 ++ SUPPORT_END=2024-11-12
00:05:35.377 ++ VARIANT='Cloud Edition'
00:05:35.377 ++ VARIANT_ID=cloud
00:05:35.377 + uname -a
00:05:35.377 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:05:35.377 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:05:35.635 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:35.892 Hugepages
00:05:35.892 node hugesize free / total
00:05:35.892 node0 1048576kB 0 / 0
00:05:35.892 node0 2048kB 0 / 0
00:05:35.892
00:05:35.892 Type BDF Vendor Device NUMA Driver Device Block devices
00:05:35.892 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:05:35.892 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:05:35.892 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:05:35.892 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:05:35.892 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:05:35.892 + rm -f /tmp/spdk-ld-path
00:05:35.892 + source autorun-spdk.conf
00:05:35.892 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:05:35.892 ++ SPDK_TEST_NVME=1
00:05:35.892 ++ SPDK_TEST_FTL=1
00:05:35.892 ++ SPDK_TEST_ISAL=1
00:05:35.892 ++ SPDK_RUN_ASAN=1
00:05:35.892 ++ SPDK_RUN_UBSAN=1
00:05:35.892 ++ SPDK_TEST_XNVME=1
00:05:35.892 ++ SPDK_TEST_NVME_FDP=1
00:05:35.892 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:05:35.892 ++ RUN_NIGHTLY=0
00:05:35.892 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:05:35.892 + [[ -n '' ]]
00:05:35.892 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:05:35.892 + for M in /var/spdk/build-*-manifest.txt
00:05:35.892 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:05:35.892 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:05:35.892 + for M in /var/spdk/build-*-manifest.txt
00:05:35.892 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:05:35.892 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:05:35.892 + for M in /var/spdk/build-*-manifest.txt
00:05:35.892 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:05:35.892 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:05:35.892 ++ uname
00:05:35.892 + [[ Linux == \L\i\n\u\x ]]
00:05:35.892 + sudo dmesg -T
00:05:36.150 + sudo dmesg --clear
00:05:36.150 + dmesg_pid=5025
+ [[ Fedora Linux == FreeBSD ]]
00:05:36.150 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:05:36.150 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:05:36.150 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:05:36.150 + [[ -x /usr/src/fio-static/fio ]]
00:05:36.150 + sudo dmesg -Tw
00:05:36.150 + export FIO_BIN=/usr/src/fio-static/fio
00:05:36.150 + FIO_BIN=/usr/src/fio-static/fio
00:05:36.150 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:05:36.150 + [[ ! -v VFIO_QEMU_BIN ]]
00:05:36.150 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:05:36.150 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:05:36.150 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:05:36.151 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:05:36.151 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:05:36.151 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:05:36.151 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:05:36.151 06:31:48 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:05:36.151 06:31:48 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:05:36.151 06:31:48 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:05:36.151 06:31:48 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:05:36.151 06:31:48 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:05:36.151 06:31:48 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:05:36.151 06:31:48 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:05:36.151 06:31:48 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:05:36.151 06:31:48 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:05:36.151 06:31:48 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:05:36.151 06:31:48 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:05:36.151 06:31:48 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
00:05:36.151 06:31:48 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:05:36.151 06:31:48 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:05:36.151 06:31:48 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:05:36.151 06:31:48 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:05:36.151 06:31:48 -- scripts/common.sh@15 -- $ shopt -s extglob
00:05:36.151 06:31:48 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:05:36.151 06:31:48 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:36.151 06:31:48 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:36.151 06:31:48 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:36.151 06:31:48 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:36.151 06:31:48 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:36.151 06:31:48 -- paths/export.sh@5 -- $ export PATH
00:05:36.151 06:31:48 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:36.151 06:31:48 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:05:36.151 06:31:48 -- common/autobuild_common.sh@493 -- $ date +%s
00:05:36.151 06:31:48 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733466708.XXXXXX
00:05:36.151 06:31:48 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733466708.m565uU
00:05:36.151 06:31:48 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:05:36.151 06:31:48 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:05:36.151 06:31:48 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:05:36.151 06:31:48 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:05:36.151 06:31:48 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:05:36.151 06:31:48 -- common/autobuild_common.sh@509 -- $ get_config_params
00:05:36.151 06:31:48 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:05:36.151 06:31:48 -- common/autotest_common.sh@10 -- $ set +x
00:05:36.151 06:31:48 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:05:36.151 06:31:48 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:05:36.151 06:31:48 -- pm/common@17 -- $ local monitor
00:05:36.151 06:31:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:36.151 06:31:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:36.151 06:31:48 -- pm/common@25 -- $ sleep 1
00:05:36.151 06:31:48 -- pm/common@21 -- $ date +%s
00:05:36.151 06:31:48 -- pm/common@21 -- $ date +%s
00:05:36.151 06:31:48 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733466708
00:05:36.151 06:31:48 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733466708
00:05:36.151 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733466708_collect-vmstat.pm.log
00:05:36.155 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733466708_collect-cpu-load.pm.log
00:05:37.524 06:31:49 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:05:37.524 06:31:49 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:05:37.524 06:31:49 -- spdk/autobuild.sh@12 -- $ umask 022
00:05:37.524 06:31:49 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:05:37.524 06:31:49 -- spdk/autobuild.sh@16 -- $ date -u
00:05:37.524 Fri Dec 6 06:31:49 AM UTC 2024
00:05:37.524 06:31:49 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:05:37.524 v25.01-pre-308-g0b1b15acc
00:05:37.524 06:31:49 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:05:37.524 06:31:49 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:05:37.524 06:31:49 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:05:37.524 06:31:49 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:05:37.524 06:31:49 -- common/autotest_common.sh@10 -- $ set +x
00:05:37.524 ************************************
00:05:37.524 START TEST asan
00:05:37.524 ************************************
00:05:37.524 using asan
00:05:37.524 06:31:49 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:05:37.524
00:05:37.524 real 0m0.000s
00:05:37.524 user 0m0.000s
00:05:37.524 sys 0m0.000s
00:05:37.524 06:31:49 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:05:37.524 ************************************
00:05:37.524 END TEST asan
00:05:37.524 06:31:49 asan -- common/autotest_common.sh@10 -- $ set +x
00:05:37.524 ************************************
00:05:37.524 06:31:49 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:05:37.524 06:31:49 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:05:37.524 06:31:49 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:05:37.524 06:31:49 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:05:37.524 06:31:49 -- common/autotest_common.sh@10 -- $ set +x
00:05:37.524 ************************************
00:05:37.524 START TEST ubsan
00:05:37.524 ************************************
00:05:37.524 using ubsan
00:05:37.524 06:31:49 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:05:37.524
00:05:37.524 real 0m0.000s
00:05:37.524 user 0m0.000s
00:05:37.524 sys 0m0.000s
00:05:37.524 06:31:49 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:05:37.524 ************************************
00:05:37.524 END TEST ubsan
00:05:37.524 06:31:49 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:05:37.524 ************************************
00:05:37.524 06:31:49 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:05:37.524 06:31:49 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:05:37.524 06:31:49 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:05:37.524 06:31:49 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:05:37.524 06:31:49 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:05:37.524 06:31:49 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:05:37.524 06:31:49 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:05:37.524 06:31:49 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:05:37.524 06:31:49 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:05:37.524 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:05:37.524 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:05:37.781 Using 'verbs' RDMA provider
00:05:48.675 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:06:00.883 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:06:00.883 Creating mk/config.mk...done.
00:06:00.883 Creating mk/cc.flags.mk...done.
00:06:00.883 Type 'make' to build.
00:06:00.883 06:32:12 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:06:00.883 06:32:12 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:06:00.883 06:32:12 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:06:00.883 06:32:12 -- common/autotest_common.sh@10 -- $ set +x
00:06:00.883 ************************************
00:06:00.883 START TEST make
00:06:00.883 ************************************
00:06:00.883 06:32:12 make -- common/autotest_common.sh@1129 -- $ make -j10
00:06:00.883 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:06:00.883 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:06:00.883 meson setup builddir \
00:06:00.883 -Dwith-libaio=enabled \
00:06:00.883 -Dwith-liburing=enabled \
00:06:00.883 -Dwith-libvfn=disabled \
00:06:00.883 -Dwith-spdk=disabled \
00:06:00.883 -Dexamples=false \
00:06:00.883 -Dtests=false \
00:06:00.883 -Dtools=false && \
00:06:00.883 meson compile -C builddir && \
00:06:00.883 cd -)
00:06:00.883 make[1]: Nothing to be done for 'all'.
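
For reference, the xnvme pre-build step echoed above boils down to the following standalone sequence (a minimal sketch, not the harness's own script: the directory, the PKG_CONFIG_PATH entries, and every -D option are taken verbatim from the log, while the comments and the error-handling flags are editorial assumptions):

    #!/usr/bin/env bash
    # Rebuild xnvme the way this make step does (sketch).
    set -euo pipefail

    cd /home/vagrant/spdk_repo/spdk/xnvme

    # Let pkg-config resolve system-installed dependencies such as liburing.
    export PKG_CONFIG_PATH="${PKG_CONFIG_PATH:-}:/usr/lib/pkgconfig:/usr/lib64/pkgconfig"

    # Configure with the same feature switches the log shows: keep the
    # libaio and io_uring backends, drop the libvfn and SPDK backends,
    # and skip examples/tests/tools for a library-only build.
    meson setup builddir \
        -Dwith-libaio=enabled \
        -Dwith-liburing=enabled \
        -Dwith-libvfn=disabled \
        -Dwith-spdk=disabled \
        -Dexamples=false \
        -Dtests=false \
        -Dtools=false

    # Compile; the log reports 3 build targets (static lib, shared lib,
    # and the generated xnvme-driver script).
    meson compile -C builddir

The Meson summary below confirms the effect of those switches: libaio and liburing are found and enabled, while the libvfn dependency and the spdk subproject are skipped.
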
00:06:01.816 The Meson build system
00:06:01.816 Version: 1.5.0
00:06:01.816 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:06:01.816 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:06:01.816 Build type: native build
00:06:01.816 Project name: xnvme
00:06:01.816 Project version: 0.7.5
00:06:01.816 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:06:01.816 C linker for the host machine: cc ld.bfd 2.40-14
00:06:01.816 Host machine cpu family: x86_64
00:06:01.816 Host machine cpu: x86_64
00:06:01.816 Message: host_machine.system: linux
00:06:01.816 Compiler for C supports arguments -Wno-missing-braces: YES
00:06:01.817 Compiler for C supports arguments -Wno-cast-function-type: YES
00:06:01.817 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:06:01.817 Run-time dependency threads found: YES
00:06:01.817 Has header "setupapi.h" : NO
00:06:01.817 Has header "linux/blkzoned.h" : YES
00:06:01.817 Has header "linux/blkzoned.h" : YES (cached)
00:06:01.817 Has header "libaio.h" : YES
00:06:01.817 Library aio found: YES
00:06:01.817 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:06:01.817 Run-time dependency liburing found: YES 2.2
00:06:01.817 Dependency libvfn skipped: feature with-libvfn disabled
00:06:01.817 Found CMake: /usr/bin/cmake (3.27.7)
00:06:01.817 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:06:01.817 Subproject spdk : skipped: feature with-spdk disabled
00:06:01.817 Run-time dependency appleframeworks found: NO (tried framework)
00:06:01.817 Run-time dependency appleframeworks found: NO (tried framework)
00:06:01.817 Library rt found: YES
00:06:01.817 Checking for function "clock_gettime" with dependency -lrt: YES
00:06:01.817 Configuring xnvme_config.h using configuration
00:06:01.817 Configuring xnvme.spec using configuration
00:06:01.817 Run-time dependency bash-completion found: YES 2.11
00:06:01.817 Message: Bash-completions: /usr/share/bash-completion/completions
00:06:01.817 Program cp found: YES (/usr/bin/cp)
00:06:01.817 Build targets in project: 3
00:06:01.817
00:06:01.817 xnvme 0.7.5
00:06:01.817
00:06:01.817 Subprojects
00:06:01.817 spdk : NO Feature 'with-spdk' disabled
00:06:01.817
00:06:01.817 User defined options
00:06:01.817 examples : false
00:06:01.817 tests : false
00:06:01.817 tools : false
00:06:01.817 with-libaio : enabled
00:06:01.817 with-liburing: enabled
00:06:01.817 with-libvfn : disabled
00:06:01.817 with-spdk : disabled
00:06:01.817
00:06:01.817 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:06:02.381 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:06:02.381 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:06:02.381 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:06:02.381 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:06:02.381 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:06:02.381 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:06:02.381 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:06:02.381 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:06:02.381 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:06:02.381 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:06:02.381 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:06:02.381 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:06:02.381 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:06:02.381 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:06:02.637 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:06:02.637 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:06:02.637 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:06:02.637 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:06:02.637 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:06:02.637 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:06:02.637 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:06:02.637 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:06:02.637 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:06:02.637 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:06:02.637 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:06:02.637 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:06:02.637 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:06:02.637 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:06:02.637 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:06:02.637 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:06:02.637 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:06:02.637 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:06:02.637 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:06:02.637 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:06:02.637 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:06:02.637 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:06:02.637 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:06:02.637 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:06:02.637 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:06:02.637 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:06:02.637 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:06:02.637 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:06:02.637 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:06:02.637 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:06:02.637 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:06:02.893 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:06:02.893 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:06:02.893 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:06:02.893 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:06:02.893 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:06:02.893 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:06:02.893 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:06:02.893 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:06:02.893 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:06:02.893 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:06:02.893 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:06:02.893 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:06:02.893 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:06:02.893 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:06:02.893 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:06:02.893 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:06:02.893 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:06:02.893 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:06:02.893 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:06:02.893 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:06:02.893 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:06:03.149 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:06:03.149 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:06:03.149 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:06:03.149 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:06:03.149 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:06:03.149 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:06:03.149 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:06:03.149 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:06:03.406 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:06:03.406 [75/76] Linking static target lib/libxnvme.a
00:06:03.406 [76/76] Linking target lib/libxnvme.so.0.7.5
00:06:03.406 INFO: autodetecting backend as ninja
00:06:03.406 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:06:03.663 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:06:11.776 The Meson build system
00:06:11.776 Version: 1.5.0
00:06:11.776 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:06:11.776 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:06:11.776 Build type: native build
00:06:11.776 Program cat found: YES (/usr/bin/cat)
00:06:11.776 Project name: DPDK
00:06:11.776 Project version: 24.03.0
00:06:11.776 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:06:11.776 C linker for the host machine: cc ld.bfd 2.40-14
00:06:11.776 Host machine cpu family: x86_64
00:06:11.776 Host machine cpu: x86_64
00:06:11.776 Message: ## Building in Developer Mode ##
00:06:11.776 Program pkg-config found: YES (/usr/bin/pkg-config)
00:06:11.776 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:06:11.776 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:06:11.776 Program python3 found: YES (/usr/bin/python3)
00:06:11.776 Program cat found: YES (/usr/bin/cat)
00:06:11.776 Compiler for C supports arguments -march=native: YES
00:06:11.777 Checking for size of "void *" : 8
00:06:11.777 Checking for size of "void *" : 8 (cached)
00:06:11.777 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:06:11.777 Library m found: YES
00:06:11.777 Library numa found: YES
00:06:11.777 Has header "numaif.h" : YES
00:06:11.777 Library fdt found: NO
00:06:11.777 Library execinfo found: NO
00:06:11.777 Has header "execinfo.h" : YES
00:06:11.777 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:06:11.777 Run-time dependency libarchive found: NO (tried pkgconfig)
00:06:11.777 Run-time dependency libbsd found: NO (tried pkgconfig)
00:06:11.777 Run-time dependency jansson found: NO (tried pkgconfig)
00:06:11.777 Run-time dependency openssl found: YES 3.1.1
00:06:11.777 Run-time dependency libpcap found: YES 1.10.4
00:06:11.777 Has header "pcap.h" with dependency libpcap: YES
00:06:11.777 Compiler for C supports arguments -Wcast-qual: YES
00:06:11.777 Compiler for C supports arguments -Wdeprecated: YES
00:06:11.777 Compiler for C supports arguments -Wformat: YES
00:06:11.777 Compiler for C supports arguments -Wformat-nonliteral: NO
00:06:11.777 Compiler for C supports arguments -Wformat-security: NO
00:06:11.777 Compiler for C supports arguments -Wmissing-declarations: YES
00:06:11.777 Compiler for C supports arguments -Wmissing-prototypes: YES
00:06:11.777 Compiler for C supports arguments -Wnested-externs: YES
00:06:11.777 Compiler for C supports arguments -Wold-style-definition: YES
00:06:11.777 Compiler for C supports arguments -Wpointer-arith: YES
00:06:11.777 Compiler for C supports arguments -Wsign-compare: YES
00:06:11.777 Compiler for C supports arguments -Wstrict-prototypes: YES
00:06:11.777 Compiler for C supports arguments -Wundef: YES
00:06:11.777 Compiler for C supports arguments -Wwrite-strings: YES
00:06:11.777 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:06:11.777 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:06:11.777 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:06:11.777 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:06:11.777 Program objdump found: YES (/usr/bin/objdump)
00:06:11.777 Compiler for C supports arguments -mavx512f: YES
00:06:11.777 Checking if "AVX512 checking" compiles: YES
00:06:11.777 Fetching value of define "__SSE4_2__" : 1
00:06:11.777 Fetching value of define "__AES__" : 1
00:06:11.777 Fetching value of define "__AVX__" : 1
00:06:11.777 Fetching value of define "__AVX2__" : 1
00:06:11.777 Fetching value of define "__AVX512BW__" : 1
00:06:11.777 Fetching value of define "__AVX512CD__" : 1
00:06:11.777 Fetching value of define "__AVX512DQ__" : 1
00:06:11.777 Fetching value of define "__AVX512F__" : 1
00:06:11.777 Fetching value of define "__AVX512VL__" : 1
00:06:11.777 Fetching value of define "__PCLMUL__" : 1
00:06:11.777 Fetching value of define "__RDRND__" : 1
00:06:11.777 Fetching value of define "__RDSEED__" : 1
00:06:11.777 Fetching value of define "__VPCLMULQDQ__" : 1
00:06:11.777 Fetching value of define "__znver1__" : (undefined)
00:06:11.777 Fetching value of define "__znver2__" : (undefined)
00:06:11.777 Fetching value of define "__znver3__" : (undefined)
00:06:11.777 Fetching value of define "__znver4__" : (undefined)
00:06:11.777 Library asan found: YES
00:06:11.777 Compiler for C supports arguments -Wno-format-truncation: YES
00:06:11.777 Message: lib/log: Defining dependency "log"
00:06:11.777 Message: lib/kvargs: Defining dependency "kvargs"
00:06:11.777 Message: lib/telemetry: Defining dependency "telemetry"
00:06:11.777 Library rt found: YES
00:06:11.777 Checking for function "getentropy" : NO
00:06:11.777 Message: lib/eal: Defining dependency "eal"
00:06:11.777 Message: lib/ring: Defining dependency "ring"
00:06:11.777 Message: lib/rcu: Defining dependency "rcu"
00:06:11.777 Message: lib/mempool: Defining dependency "mempool"
00:06:11.777 Message: lib/mbuf: Defining dependency "mbuf"
00:06:11.777 Fetching value of define "__PCLMUL__" : 1 (cached)
00:06:11.777 Fetching value of define "__AVX512F__" : 1 (cached)
00:06:11.777 Fetching value of define "__AVX512BW__" : 1 (cached)
00:06:11.777 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:06:11.777 Fetching value of define "__AVX512VL__" : 1 (cached)
00:06:11.777 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:06:11.777 Compiler for C supports arguments -mpclmul: YES
00:06:11.777 Compiler for C supports arguments -maes: YES
00:06:11.777 Compiler for C supports arguments -mavx512f: YES (cached)
00:06:11.777 Compiler for C supports arguments -mavx512bw: YES
00:06:11.777 Compiler for C supports arguments -mavx512dq: YES
00:06:11.777 Compiler for C supports arguments -mavx512vl: YES
00:06:11.777 Compiler for C supports arguments -mvpclmulqdq: YES
00:06:11.777 Compiler for C supports arguments -mavx2: YES
00:06:11.777 Compiler for C supports arguments -mavx: YES
00:06:11.777 Message: lib/net: Defining dependency "net"
00:06:11.777 Message: lib/meter: Defining dependency "meter"
00:06:11.777 Message: lib/ethdev: Defining dependency "ethdev"
00:06:11.777 Message: lib/pci: Defining dependency "pci"
00:06:11.777 Message: lib/cmdline: Defining dependency "cmdline"
00:06:11.777 Message: lib/hash: Defining dependency "hash"
00:06:11.777 Message: lib/timer: Defining dependency "timer"
00:06:11.777 Message: lib/compressdev: Defining dependency "compressdev"
00:06:11.777 Message: lib/cryptodev: Defining dependency "cryptodev"
00:06:11.777 Message: lib/dmadev: Defining dependency "dmadev"
00:06:11.777 Compiler for C supports arguments -Wno-cast-qual: YES
00:06:11.777 Message: lib/power: Defining dependency "power"
00:06:11.777 Message: lib/reorder: Defining dependency "reorder"
00:06:11.777 Message: lib/security: Defining dependency "security"
00:06:11.777 Has header "linux/userfaultfd.h" : YES
00:06:11.777 Has header "linux/vduse.h" : YES
00:06:11.777 Message: lib/vhost: Defining dependency "vhost"
00:06:11.777 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:06:11.777 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:06:11.777 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:06:11.777 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:06:11.777 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:06:11.777 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:06:11.777 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:06:11.777 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:06:11.777 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:06:11.777 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:06:11.777 Program doxygen found: YES (/usr/local/bin/doxygen)
00:06:11.777 Configuring doxy-api-html.conf using configuration
00:06:11.777 Configuring doxy-api-man.conf using configuration
00:06:11.777 Program mandb found: YES (/usr/bin/mandb)
00:06:11.777 Program sphinx-build found: NO
00:06:11.777 Configuring rte_build_config.h using configuration
00:06:11.777 Message:
00:06:11.777 =================
00:06:11.777 Applications Enabled
00:06:11.777 =================
00:06:11.777
00:06:11.777 apps:
00:06:11.777
00:06:11.777
00:06:11.777 Message:
00:06:11.777 =================
00:06:11.777 Libraries Enabled
00:06:11.777 =================
00:06:11.777
00:06:11.777 libs:
00:06:11.777 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:06:11.777 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:06:11.777 cryptodev, dmadev, power, reorder, security, vhost,
00:06:11.777
00:06:11.777 Message:
00:06:11.777 ===============
00:06:11.777 Drivers Enabled
00:06:11.777 ===============
00:06:11.777
00:06:11.777 common:
00:06:11.777
00:06:11.777 bus:
00:06:11.777 pci, vdev,
00:06:11.777 mempool:
00:06:11.777 ring,
00:06:11.777 dma:
00:06:11.777
00:06:11.777 net:
00:06:11.777
00:06:11.777 crypto:
00:06:11.777
00:06:11.777 compress:
00:06:11.777
00:06:11.777 vdpa:
00:06:11.777
00:06:11.777
00:06:11.777 Message:
00:06:11.777 =================
00:06:11.777 Content Skipped
00:06:11.777 =================
00:06:11.777
00:06:11.777 apps:
00:06:11.777 dumpcap: explicitly disabled via build config
00:06:11.777 graph: explicitly disabled via build config
00:06:11.777 pdump: explicitly disabled via build config
00:06:11.777 proc-info: explicitly disabled via build config
00:06:11.777 test-acl: explicitly disabled via build config
00:06:11.777 test-bbdev: explicitly disabled via build config
00:06:11.777 test-cmdline: explicitly disabled via build config
00:06:11.777 test-compress-perf: explicitly disabled via build config
00:06:11.777 test-crypto-perf: explicitly disabled via build config
00:06:11.777 test-dma-perf: explicitly disabled via build config
00:06:11.777 test-eventdev: explicitly disabled via build config
00:06:11.777 test-fib: explicitly disabled via build config
00:06:11.777 test-flow-perf: explicitly disabled via build config
00:06:11.777 test-gpudev: explicitly disabled via build config
00:06:11.777 test-mldev: explicitly disabled via build config
00:06:11.777 test-pipeline: explicitly disabled via build config
00:06:11.777 test-pmd: explicitly disabled via build config
00:06:11.777 test-regex: explicitly disabled via build config
00:06:11.777 test-sad: explicitly disabled via build config
00:06:11.778 test-security-perf: explicitly disabled via build config
00:06:11.778
00:06:11.778 libs:
00:06:11.778 argparse: explicitly disabled via build config
00:06:11.778 metrics: explicitly disabled via build config
00:06:11.778 acl: explicitly disabled via build config
00:06:11.778 bbdev: explicitly disabled via build config
00:06:11.778 bitratestats: explicitly disabled via build config
00:06:11.778 bpf: explicitly disabled via build config
00:06:11.778 cfgfile: explicitly disabled via build config
00:06:11.778 distributor: explicitly disabled via build config
00:06:11.778 efd: explicitly disabled via build config
00:06:11.778 eventdev: explicitly disabled via build config
00:06:11.778 dispatcher: explicitly disabled via build config
00:06:11.778 gpudev: explicitly disabled via build config
00:06:11.778 gro: explicitly disabled via build config
00:06:11.778 gso: explicitly disabled via build config
00:06:11.778 ip_frag: explicitly disabled via build config
00:06:11.778 jobstats: explicitly disabled via build config
00:06:11.778 latencystats: explicitly disabled via build config
00:06:11.778 lpm: explicitly disabled via build config
00:06:11.778 member: explicitly disabled via build config
00:06:11.778 pcapng: explicitly disabled via build config
00:06:11.778 rawdev: explicitly disabled via build config
00:06:11.778 regexdev: explicitly disabled via build config
00:06:11.778 mldev: explicitly disabled via build config
00:06:11.778 rib: explicitly disabled via build config
00:06:11.778 sched: explicitly disabled via build config
00:06:11.778 stack: explicitly disabled via build config
00:06:11.778 ipsec: explicitly disabled via build config
00:06:11.778 pdcp: explicitly disabled via build config
00:06:11.778 fib: explicitly disabled via build config
00:06:11.778 port: explicitly disabled via build config
00:06:11.778 pdump: explicitly disabled via build config
00:06:11.778 table: explicitly disabled via build config
00:06:11.778 pipeline: explicitly disabled via build config
00:06:11.778 graph: explicitly disabled via build config
00:06:11.778 node: explicitly disabled via build config
00:06:11.778
00:06:11.778 drivers:
00:06:11.778 common/cpt: not in enabled drivers build config
00:06:11.778 common/dpaax: not in enabled drivers build config
00:06:11.778 common/iavf: not in enabled drivers build config
00:06:11.778 common/idpf: not in enabled drivers build config
00:06:11.778 common/ionic: not in enabled drivers build config
00:06:11.778 common/mvep: not in enabled drivers build config
00:06:11.778 common/octeontx: not in enabled drivers build config
00:06:11.778 bus/auxiliary: not in enabled drivers build config
00:06:11.778 bus/cdx: not in enabled drivers build config
00:06:11.778 bus/dpaa: not in enabled drivers build config
00:06:11.778 bus/fslmc: not in enabled drivers build config
00:06:11.778 bus/ifpga: not in enabled drivers build config
00:06:11.778 bus/platform: not in enabled drivers build config
00:06:11.778 bus/uacce: not in enabled drivers build config
00:06:11.778 bus/vmbus: not in enabled drivers build config
00:06:11.778 common/cnxk: not in enabled drivers build config
00:06:11.778 common/mlx5: not in enabled drivers build config
00:06:11.778 common/nfp: not in enabled drivers build config
00:06:11.778 common/nitrox: not in enabled drivers build config
00:06:11.778 common/qat: not in enabled drivers build config
00:06:11.778 common/sfc_efx: not in enabled drivers build config
00:06:11.778 mempool/bucket: not in enabled drivers build config
00:06:11.778 mempool/cnxk: not in enabled drivers build config
00:06:11.778 mempool/dpaa: not in enabled drivers build config
00:06:11.778 mempool/dpaa2: not in enabled drivers build config
00:06:11.778 mempool/octeontx: not in enabled drivers build config
00:06:11.778 mempool/stack: not in enabled drivers build config
00:06:11.778 dma/cnxk: not in enabled drivers build config
00:06:11.778 dma/dpaa: not in enabled drivers build config
00:06:11.778 dma/dpaa2: not in enabled drivers build config
00:06:11.778 dma/hisilicon: not in enabled drivers build config
00:06:11.778 dma/idxd: not in enabled drivers build config
00:06:11.778 dma/ioat: not in enabled drivers build config
00:06:11.778 dma/skeleton: not in enabled drivers build config
00:06:11.778 net/af_packet: not in enabled drivers build config
00:06:11.778 net/af_xdp: not in enabled drivers build config
00:06:11.778 net/ark: not in enabled drivers build config
00:06:11.778 net/atlantic: not in enabled drivers build config
00:06:11.778 net/avp: not in enabled drivers build config
00:06:11.778 net/axgbe: not in enabled drivers build config
00:06:11.778 net/bnx2x: not in enabled drivers build config
00:06:11.778 net/bnxt: not in enabled drivers build config
00:06:11.778 net/bonding: not in enabled drivers build config
00:06:11.778 net/cnxk: not in enabled drivers build config
00:06:11.778 net/cpfl: not in enabled drivers
build config 00:06:11.778 net/cxgbe: not in enabled drivers build config 00:06:11.778 net/dpaa: not in enabled drivers build config 00:06:11.778 net/dpaa2: not in enabled drivers build config 00:06:11.778 net/e1000: not in enabled drivers build config 00:06:11.778 net/ena: not in enabled drivers build config 00:06:11.778 net/enetc: not in enabled drivers build config 00:06:11.778 net/enetfec: not in enabled drivers build config 00:06:11.778 net/enic: not in enabled drivers build config 00:06:11.778 net/failsafe: not in enabled drivers build config 00:06:11.778 net/fm10k: not in enabled drivers build config 00:06:11.778 net/gve: not in enabled drivers build config 00:06:11.778 net/hinic: not in enabled drivers build config 00:06:11.778 net/hns3: not in enabled drivers build config 00:06:11.778 net/i40e: not in enabled drivers build config 00:06:11.778 net/iavf: not in enabled drivers build config 00:06:11.778 net/ice: not in enabled drivers build config 00:06:11.778 net/idpf: not in enabled drivers build config 00:06:11.778 net/igc: not in enabled drivers build config 00:06:11.778 net/ionic: not in enabled drivers build config 00:06:11.778 net/ipn3ke: not in enabled drivers build config 00:06:11.778 net/ixgbe: not in enabled drivers build config 00:06:11.778 net/mana: not in enabled drivers build config 00:06:11.778 net/memif: not in enabled drivers build config 00:06:11.778 net/mlx4: not in enabled drivers build config 00:06:11.778 net/mlx5: not in enabled drivers build config 00:06:11.778 net/mvneta: not in enabled drivers build config 00:06:11.778 net/mvpp2: not in enabled drivers build config 00:06:11.778 net/netvsc: not in enabled drivers build config 00:06:11.778 net/nfb: not in enabled drivers build config 00:06:11.778 net/nfp: not in enabled drivers build config 00:06:11.778 net/ngbe: not in enabled drivers build config 00:06:11.778 net/null: not in enabled drivers build config 00:06:11.778 net/octeontx: not in enabled drivers build config 00:06:11.778 net/octeon_ep: not in enabled drivers build config 00:06:11.778 net/pcap: not in enabled drivers build config 00:06:11.778 net/pfe: not in enabled drivers build config 00:06:11.778 net/qede: not in enabled drivers build config 00:06:11.778 net/ring: not in enabled drivers build config 00:06:11.778 net/sfc: not in enabled drivers build config 00:06:11.778 net/softnic: not in enabled drivers build config 00:06:11.778 net/tap: not in enabled drivers build config 00:06:11.778 net/thunderx: not in enabled drivers build config 00:06:11.778 net/txgbe: not in enabled drivers build config 00:06:11.778 net/vdev_netvsc: not in enabled drivers build config 00:06:11.778 net/vhost: not in enabled drivers build config 00:06:11.778 net/virtio: not in enabled drivers build config 00:06:11.778 net/vmxnet3: not in enabled drivers build config 00:06:11.778 raw/*: missing internal dependency, "rawdev" 00:06:11.778 crypto/armv8: not in enabled drivers build config 00:06:11.778 crypto/bcmfs: not in enabled drivers build config 00:06:11.778 crypto/caam_jr: not in enabled drivers build config 00:06:11.778 crypto/ccp: not in enabled drivers build config 00:06:11.778 crypto/cnxk: not in enabled drivers build config 00:06:11.778 crypto/dpaa_sec: not in enabled drivers build config 00:06:11.778 crypto/dpaa2_sec: not in enabled drivers build config 00:06:11.778 crypto/ipsec_mb: not in enabled drivers build config 00:06:11.778 crypto/mlx5: not in enabled drivers build config 00:06:11.778 crypto/mvsam: not in enabled drivers build config 00:06:11.778 crypto/nitrox: 
not in enabled drivers build config 00:06:11.778 crypto/null: not in enabled drivers build config 00:06:11.778 crypto/octeontx: not in enabled drivers build config 00:06:11.778 crypto/openssl: not in enabled drivers build config 00:06:11.778 crypto/scheduler: not in enabled drivers build config 00:06:11.778 crypto/uadk: not in enabled drivers build config 00:06:11.778 crypto/virtio: not in enabled drivers build config 00:06:11.778 compress/isal: not in enabled drivers build config 00:06:11.778 compress/mlx5: not in enabled drivers build config 00:06:11.778 compress/nitrox: not in enabled drivers build config 00:06:11.778 compress/octeontx: not in enabled drivers build config 00:06:11.778 compress/zlib: not in enabled drivers build config 00:06:11.778 regex/*: missing internal dependency, "regexdev" 00:06:11.778 ml/*: missing internal dependency, "mldev" 00:06:11.778 vdpa/ifc: not in enabled drivers build config 00:06:11.778 vdpa/mlx5: not in enabled drivers build config 00:06:11.779 vdpa/nfp: not in enabled drivers build config 00:06:11.779 vdpa/sfc: not in enabled drivers build config 00:06:11.779 event/*: missing internal dependency, "eventdev" 00:06:11.779 baseband/*: missing internal dependency, "bbdev" 00:06:11.779 gpu/*: missing internal dependency, "gpudev" 00:06:11.779 00:06:11.779 00:06:11.779 Build targets in project: 84 00:06:11.779 00:06:11.779 DPDK 24.03.0 00:06:11.779 00:06:11.779 User defined options 00:06:11.779 buildtype : debug 00:06:11.779 default_library : shared 00:06:11.779 libdir : lib 00:06:11.779 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:06:11.779 b_sanitize : address 00:06:11.779 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:06:11.779 c_link_args : 00:06:11.779 cpu_instruction_set: native 00:06:11.779 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:06:11.779 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:06:11.779 enable_docs : false 00:06:11.779 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:06:11.779 enable_kmods : false 00:06:11.779 max_lcores : 128 00:06:11.779 tests : false 00:06:11.779 00:06:11.779 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:06:11.779 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:06:11.779 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:06:11.779 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:06:11.779 [3/267] Linking static target lib/librte_kvargs.a 00:06:11.779 [4/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:06:11.779 [5/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:06:11.779 [6/267] Linking static target lib/librte_log.a 00:06:12.037 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:06:12.037 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:06:12.296 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:06:12.296 [10/267] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:06:12.296 [11/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:06:12.296 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:06:12.296 [13/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:06:12.296 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:06:12.296 [15/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:06:12.296 [16/267] Linking static target lib/librte_telemetry.a 00:06:12.296 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:06:12.296 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:06:12.554 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:06:12.811 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:06:12.811 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:06:12.811 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:06:12.811 [23/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:06:12.811 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:06:12.811 [25/267] Linking target lib/librte_log.so.24.1 00:06:12.811 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:06:12.811 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:06:13.068 [28/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:06:13.068 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:06:13.068 [30/267] Linking target lib/librte_kvargs.so.24.1 00:06:13.068 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:06:13.068 [32/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:06:13.068 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:06:13.068 [34/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:06:13.068 [35/267] Linking target lib/librte_telemetry.so.24.1 00:06:13.325 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:06:13.325 [37/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:06:13.325 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:06:13.325 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:06:13.325 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:06:13.325 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:06:13.325 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:06:13.325 [43/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:06:13.325 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:06:13.583 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:06:13.583 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:06:13.583 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:06:13.583 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 
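For reference, the "User defined options" summary above maps one-to-one onto a meson setup invocation along the following lines. This is a reconstructed sketch, not the exact command issued by the build harness (that command is not captured in this log); the option values are copied verbatim from the summary:

    meson setup build-tmp \
      --buildtype=debug \
      --default-library=shared \
      --libdir=lib \
      --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
      -Db_sanitize=address \
      -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
      -Dcpu_instruction_set=native \
      -Ddisable_apps=dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test \
      -Ddisable_libs=acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table \
      -Denable_docs=false \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm \
      -Denable_kmods=false \
      -Dmax_lcores=128 \
      -Dtests=false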
00:06:13.841 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:06:13.841 [50/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:06:13.841 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:06:13.841 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:06:13.841 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:06:14.099 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:06:14.099 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:06:14.099 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:06:14.099 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:06:14.099 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:06:14.357 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:06:14.357 [60/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:06:14.357 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:06:14.357 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:06:14.357 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:06:14.357 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:06:14.616 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:06:14.616 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:06:14.616 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:06:14.616 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:06:14.616 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:06:14.927 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:06:14.927 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:06:14.927 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:06:14.927 [73/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:06:14.927 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:06:15.186 [75/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:06:15.186 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:06:15.186 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:06:15.186 [78/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:06:15.186 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:06:15.186 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:06:15.444 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:06:15.444 [82/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:06:15.444 [83/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:06:15.444 [84/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:06:15.444 [85/267] Linking static target lib/librte_eal.a
00:06:15.702 [86/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:06:15.702 [87/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:06:15.702 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:06:15.702 [89/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:06:15.702 [90/267] Linking static target lib/librte_ring.a
00:06:15.702 [91/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:06:15.702 [92/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:06:15.702 [93/267] Linking static target lib/librte_mempool.a
00:06:15.702 [94/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:06:15.960 [95/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:06:15.960 [96/267] Linking static target lib/librte_rcu.a
00:06:15.960 [97/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:06:16.217 [98/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:06:16.217 [99/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:06:16.217 [100/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:06:16.217 [101/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:06:16.217 [102/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:06:16.217 [103/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:06:16.217 [104/267] Linking static target lib/librte_mbuf.a
00:06:16.475 [105/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:06:16.475 [106/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:06:16.475 [107/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:06:16.475 [108/267] Linking static target lib/librte_meter.a
00:06:16.475 [109/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o
00:06:16.475 [110/267] Linking static target lib/librte_net.a
00:06:16.731 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:06:16.731 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:06:16.731 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:06:16.989 [114/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:06:16.989 [115/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:06:16.989 [116/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:06:16.989 [117/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:06:16.989 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:06:17.247 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:06:17.247 [120/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:06:17.247 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:06:17.507 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:06:17.507 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:06:17.507 [124/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:06:17.507 [125/267] Linking static target lib/librte_pci.a
00:06:17.507 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:06:17.764 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:06:17.764 [128/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:06:17.764 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:06:17.764 [130/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:06:17.764 [131/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:06:17.764 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:06:17.764 [133/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:06:17.764 [134/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:06:17.764 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:06:17.764 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:06:18.022 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:06:18.022 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:06:18.022 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:06:18.022 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:06:18.022 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:06:18.022 [142/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:06:18.022 [143/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:06:18.022 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:06:18.280 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:06:18.280 [146/267] Linking static target lib/librte_cmdline.a
00:06:18.537 [147/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:06:18.537 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:06:18.537 [149/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:06:18.537 [150/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:06:18.537 [151/267] Linking static target lib/librte_timer.a
00:06:18.537 [152/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:06:18.537 [153/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:06:19.103 [154/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:06:19.103 [155/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:06:19.103 [156/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:06:19.103 [157/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:06:19.103 [158/267] Linking static target lib/librte_hash.a
00:06:19.103 [159/267] Linking static target lib/librte_ethdev.a
00:06:19.103 [160/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:06:19.103 [161/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:06:19.103 [162/267] Linking static target lib/librte_compressdev.a
00:06:19.103 [163/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:06:19.361 [164/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:06:19.361 [165/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:06:19.361 [166/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:06:19.361 [167/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:06:19.361 [168/267] Linking static target lib/librte_dmadev.a
00:06:19.619 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:06:19.619 [170/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:06:19.619 [171/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:06:19.876 [172/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:06:19.877 [173/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:06:19.877 [174/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:06:20.134 [175/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:06:20.134 [176/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:06:20.134 [177/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:06:20.134 [178/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:06:20.134 [179/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:06:20.134 [180/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:06:20.393 [181/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:06:20.393 [182/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:06:20.393 [183/267] Linking static target lib/librte_power.a
00:06:20.651 [184/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:06:20.651 [185/267] Linking static target lib/librte_cryptodev.a
00:06:20.651 [186/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:06:20.651 [187/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:06:20.651 [188/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:06:20.651 [189/267] Linking static target lib/librte_reorder.a
00:06:20.908 [190/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:06:20.908 [191/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:06:20.908 [192/267] Linking static target lib/librte_security.a
00:06:21.166 [193/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:06:21.166 [194/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:06:21.424 [195/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:06:21.424 [196/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:06:21.682 [197/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:06:21.682 [198/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:06:21.682 [199/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:06:21.682 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:06:21.940 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:06:21.940 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:06:21.940 [203/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:06:22.197 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:06:22.197 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:06:22.197 [206/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:06:22.197 [207/267] Linking static target drivers/libtmp_rte_bus_pci.a
00:06:22.454 [208/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:06:22.454 [209/267] Linking static target drivers/libtmp_rte_bus_vdev.a
00:06:22.454 [210/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:06:22.711 [211/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:06:22.711 [212/267] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:06:22.711 [213/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:06:22.711 [214/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:06:22.711 [215/267] Linking static target drivers/librte_bus_vdev.a
00:06:22.711 [216/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:06:22.711 [217/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:06:22.711 [218/267] Linking static target drivers/librte_bus_pci.a
00:06:22.711 [219/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:06:22.711 [220/267] Linking static target drivers/libtmp_rte_mempool_ring.a
00:06:23.071 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:06:23.071 [222/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:06:23.071 [223/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:06:23.071 [224/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:06:23.071 [225/267] Linking static target drivers/librte_mempool_ring.a
00:06:23.071 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:06:23.644 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:06:24.577 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:06:24.577 [229/267] Linking target lib/librte_eal.so.24.1
00:06:24.577 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:06:24.835 [231/267] Linking target lib/librte_meter.so.24.1
00:06:24.835 [232/267] Linking target lib/librte_timer.so.24.1
00:06:24.835 [233/267] Linking target lib/librte_ring.so.24.1
00:06:24.835 [234/267] Linking target drivers/librte_bus_vdev.so.24.1
00:06:24.835 [235/267] Linking target lib/librte_dmadev.so.24.1
00:06:24.836 [236/267] Linking target lib/librte_pci.so.24.1
00:06:24.836 [237/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:06:24.836 [238/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:06:24.836 [239/267] Linking target lib/librte_rcu.so.24.1
00:06:24.836 [240/267] Linking target lib/librte_mempool.so.24.1
00:06:24.836 [241/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:06:24.836 [242/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols
00:06:24.836 [243/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols
00:06:24.836 [244/267] Linking target drivers/librte_bus_pci.so.24.1
00:06:25.093 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:06:25.093 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:06:25.093 [247/267] Linking target drivers/librte_mempool_ring.so.24.1
00:06:25.093 [248/267] Linking target lib/librte_mbuf.so.24.1
00:06:25.093 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
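A note on the recurring "Generating lib/X.sym_chk" steps interleaved above: these are DPDK's per-library symbol checks, which verify that the symbols a freshly built library actually exports match the ones declared in its version map. The exact helper that meson wraps here is not shown in this log, so the sketch below only approximates the idea, with hypothetical temp-file names:

    # Illustrative approximation of a sym_chk step (not DPDK's actual script):
    nm -g --defined-only build-tmp/lib/librte_eal.so.24.1 \
        | awk '{print $3}' | sort -u > /tmp/exported.syms
    # Symbols promised by the library's version map:
    grep -oE '\brte_[A-Za-z0-9_]+' lib/eal/version.map | sort -u > /tmp/declared.syms
    diff /tmp/exported.syms /tmp/declared.syms   # a non-empty diff would flag drift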
00:06:25.093 [250/267] Linking target lib/librte_compressdev.so.24.1
00:06:25.093 [251/267] Linking target lib/librte_net.so.24.1
00:06:25.093 [252/267] Linking target lib/librte_reorder.so.24.1
00:06:25.093 [253/267] Linking target lib/librte_cryptodev.so.24.1
00:06:25.351 [254/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:06:25.351 [255/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:06:25.351 [256/267] Linking target lib/librte_cmdline.so.24.1
00:06:25.351 [257/267] Linking target lib/librte_security.so.24.1
00:06:25.351 [258/267] Linking target lib/librte_hash.so.24.1
00:06:25.608 [259/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols
00:06:25.866 [260/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:06:25.866 [261/267] Linking target lib/librte_ethdev.so.24.1
00:06:25.866 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols
00:06:25.866 [263/267] Linking target lib/librte_power.so.24.1
00:06:27.240 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:06:27.240 [265/267] Linking static target lib/librte_vhost.a
00:06:28.616 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:06:28.616 [267/267] Linking target lib/librte_vhost.so.24.1
00:06:28.616 INFO: autodetecting backend as ninja
00:06:28.616 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10
00:06:46.689 CC lib/ut_mock/mock.o
00:06:46.689 CC lib/log/log.o
00:06:46.689 CC lib/ut/ut.o
00:06:46.689 CC lib/log/log_deprecated.o
00:06:46.689 CC lib/log/log_flags.o
00:06:46.689 LIB libspdk_ut.a
00:06:46.947 LIB libspdk_ut_mock.a
00:06:46.947 SO libspdk_ut.so.2.0
00:06:46.947 LIB libspdk_log.a
00:06:46.947 SO libspdk_ut_mock.so.6.0
00:06:46.947 SYMLINK libspdk_ut.so
00:06:46.947 SO libspdk_log.so.7.1
00:06:46.947 SYMLINK libspdk_ut_mock.so
00:06:46.947 SYMLINK libspdk_log.so
00:06:47.204 CC lib/util/base64.o
00:06:47.204 CC lib/util/bit_array.o
00:06:47.204 CC lib/util/cpuset.o
00:06:47.204 CC lib/util/crc16.o
00:06:47.204 CC lib/dma/dma.o
00:06:47.204 CC lib/util/crc32.o
00:06:47.204 CC lib/util/crc32c.o
00:06:47.204 CC lib/ioat/ioat.o
00:06:47.204 CXX lib/trace_parser/trace.o
00:06:47.204 CC lib/vfio_user/host/vfio_user_pci.o
00:06:47.204 CC lib/util/crc32_ieee.o
00:06:47.204 CC lib/util/crc64.o
00:06:47.204 CC lib/util/dif.o
00:06:47.204 CC lib/util/fd.o
00:06:47.204 CC lib/util/fd_group.o
00:06:47.461 LIB libspdk_dma.a
00:06:47.461 SO libspdk_dma.so.5.0
00:06:47.461 CC lib/util/file.o
00:06:47.461 SYMLINK libspdk_dma.so
00:06:47.461 CC lib/util/hexlify.o
00:06:47.461 CC lib/vfio_user/host/vfio_user.o
00:06:47.461 CC lib/util/iov.o
00:06:47.461 CC lib/util/math.o
00:06:47.461 CC lib/util/net.o
00:06:47.461 LIB libspdk_ioat.a
00:06:47.461 SO libspdk_ioat.so.7.0
00:06:47.461 CC lib/util/pipe.o
00:06:47.718 CC lib/util/strerror_tls.o
00:06:47.718 CC lib/util/string.o
00:06:47.718 SYMLINK libspdk_ioat.so
00:06:47.718 CC lib/util/uuid.o
00:06:47.718 CC lib/util/xor.o
00:06:47.718 CC lib/util/zipf.o
00:06:47.718 LIB libspdk_vfio_user.a
00:06:47.718 CC lib/util/md5.o
00:06:47.718 SO libspdk_vfio_user.so.5.0
00:06:47.718 SYMLINK libspdk_vfio_user.so
00:06:47.975 LIB libspdk_util.a
00:06:47.975 SO libspdk_util.so.10.1
00:06:47.975 LIB libspdk_trace_parser.a
00:06:48.233 SO libspdk_trace_parser.so.6.0
00:06:48.233 SYMLINK libspdk_util.so
00:06:48.233 SYMLINK libspdk_trace_parser.so
00:06:48.233 CC lib/conf/conf.o
00:06:48.233 CC lib/rdma_utils/rdma_utils.o
00:06:48.491 CC lib/idxd/idxd.o
00:06:48.491 CC lib/idxd/idxd_user.o
00:06:48.491 CC lib/json/json_parse.o
00:06:48.491 CC lib/json/json_util.o
00:06:48.491 CC lib/json/json_write.o
00:06:48.491 CC lib/idxd/idxd_kernel.o
00:06:48.491 CC lib/env_dpdk/env.o
00:06:48.491 CC lib/vmd/vmd.o
00:06:48.491 CC lib/env_dpdk/memory.o
00:06:48.491 CC lib/vmd/led.o
00:06:48.749 CC lib/env_dpdk/pci.o
00:06:48.749 LIB libspdk_rdma_utils.a
00:06:48.749 LIB libspdk_conf.a
00:06:48.749 SO libspdk_rdma_utils.so.1.0
00:06:48.749 SO libspdk_conf.so.6.0
00:06:48.749 LIB libspdk_json.a
00:06:48.749 SYMLINK libspdk_rdma_utils.so
00:06:48.749 SYMLINK libspdk_conf.so
00:06:48.749 CC lib/env_dpdk/init.o
00:06:48.749 CC lib/env_dpdk/threads.o
00:06:48.749 CC lib/env_dpdk/pci_ioat.o
00:06:48.749 SO libspdk_json.so.6.0
00:06:48.749 SYMLINK libspdk_json.so
00:06:49.006 CC lib/env_dpdk/pci_virtio.o
00:06:49.006 CC lib/rdma_provider/common.o
00:06:49.006 CC lib/env_dpdk/pci_vmd.o
00:06:49.006 CC lib/jsonrpc/jsonrpc_server.o
00:06:49.006 LIB libspdk_idxd.a
00:06:49.006 CC lib/env_dpdk/pci_idxd.o
00:06:49.006 SO libspdk_idxd.so.12.1
00:06:49.006 LIB libspdk_vmd.a
00:06:49.006 CC lib/env_dpdk/pci_event.o
00:06:49.006 CC lib/env_dpdk/sigbus_handler.o
00:06:49.006 SO libspdk_vmd.so.6.0
00:06:49.263 SYMLINK libspdk_idxd.so
00:06:49.263 CC lib/env_dpdk/pci_dpdk.o
00:06:49.263 CC lib/rdma_provider/rdma_provider_verbs.o
00:06:49.263 CC lib/env_dpdk/pci_dpdk_2207.o
00:06:49.263 SYMLINK libspdk_vmd.so
00:06:49.263 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:06:49.263 CC lib/env_dpdk/pci_dpdk_2211.o
00:06:49.263 CC lib/jsonrpc/jsonrpc_client.o
00:06:49.263 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:06:49.263 LIB libspdk_rdma_provider.a
00:06:49.524 SO libspdk_rdma_provider.so.7.0
00:06:49.524 LIB libspdk_jsonrpc.a
00:06:49.524 SYMLINK libspdk_rdma_provider.so
00:06:49.524 SO libspdk_jsonrpc.so.6.0
00:06:49.524 SYMLINK libspdk_jsonrpc.so
00:06:49.781 CC lib/rpc/rpc.o
00:06:49.781 LIB libspdk_env_dpdk.a
00:06:50.045 LIB libspdk_rpc.a
00:06:50.045 SO libspdk_env_dpdk.so.15.1
00:06:50.045 SO libspdk_rpc.so.6.0
00:06:50.045 SYMLINK libspdk_rpc.so
00:06:50.045 SYMLINK libspdk_env_dpdk.so
00:06:50.350 CC lib/keyring/keyring_rpc.o
00:06:50.350 CC lib/notify/notify.o
00:06:50.350 CC lib/keyring/keyring.o
00:06:50.350 CC lib/notify/notify_rpc.o
00:06:50.350 CC lib/trace/trace.o
00:06:50.350 CC lib/trace/trace_rpc.o
00:06:50.350 CC lib/trace/trace_flags.o
00:06:50.350 LIB libspdk_notify.a
00:06:50.350 SO libspdk_notify.so.6.0
00:06:50.608 SYMLINK libspdk_notify.so
00:06:50.608 LIB libspdk_keyring.a
00:06:50.608 LIB libspdk_trace.a
00:06:50.608 SO libspdk_keyring.so.2.0
00:06:50.608 SO libspdk_trace.so.11.0
00:06:50.608 SYMLINK libspdk_keyring.so
00:06:50.608 SYMLINK libspdk_trace.so
00:06:50.866 CC lib/thread/thread.o
00:06:50.866 CC lib/thread/iobuf.o
00:06:50.866 CC lib/sock/sock_rpc.o
00:06:50.866 CC lib/sock/sock.o
00:06:51.432 LIB libspdk_sock.a
00:06:51.432 SO libspdk_sock.so.10.0
00:06:51.690 SYMLINK libspdk_sock.so
00:06:51.690 CC lib/nvme/nvme_ctrlr_cmd.o
00:06:51.690 CC lib/nvme/nvme_ctrlr.o
00:06:51.690 CC lib/nvme/nvme_fabric.o
00:06:51.690 CC lib/nvme/nvme_ns_cmd.o
00:06:51.690 CC lib/nvme/nvme_pcie.o
00:06:51.690 CC lib/nvme/nvme_qpair.o
00:06:51.690 CC lib/nvme/nvme_ns.o
00:06:51.690 CC lib/nvme/nvme.o
00:06:51.690 CC lib/nvme/nvme_pcie_common.o
00:06:52.622 CC lib/nvme/nvme_quirks.o
00:06:52.622 CC lib/nvme/nvme_transport.o
00:06:52.622 CC lib/nvme/nvme_discovery.o
00:06:52.622 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:06:52.622 LIB libspdk_thread.a
00:06:52.622 SO libspdk_thread.so.11.0
00:06:52.622 SYMLINK libspdk_thread.so
00:06:52.622 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:06:52.879 CC lib/nvme/nvme_tcp.o
00:06:52.879 CC lib/nvme/nvme_opal.o
00:06:52.879 CC lib/nvme/nvme_io_msg.o
00:06:52.879 CC lib/nvme/nvme_poll_group.o
00:06:52.879 CC lib/nvme/nvme_zns.o
00:06:53.136 CC lib/nvme/nvme_stubs.o
00:06:53.136 CC lib/nvme/nvme_auth.o
00:06:53.394 CC lib/accel/accel.o
00:06:53.394 CC lib/accel/accel_rpc.o
00:06:53.394 CC lib/nvme/nvme_cuse.o
00:06:53.394 CC lib/accel/accel_sw.o
00:06:53.394 CC lib/nvme/nvme_rdma.o
00:06:53.651 CC lib/blob/blobstore.o
00:06:53.651 CC lib/init/json_config.o
00:06:53.909 CC lib/virtio/virtio.o
00:06:53.909 CC lib/fsdev/fsdev.o
00:06:53.909 CC lib/init/subsystem.o
00:06:54.167 CC lib/init/subsystem_rpc.o
00:06:54.167 CC lib/virtio/virtio_vhost_user.o
00:06:54.167 CC lib/virtio/virtio_vfio_user.o
00:06:54.167 CC lib/virtio/virtio_pci.o
00:06:54.167 CC lib/blob/request.o
00:06:54.167 CC lib/init/rpc.o
00:06:54.167 CC lib/blob/zeroes.o
00:06:54.425 CC lib/fsdev/fsdev_io.o
00:06:54.425 LIB libspdk_init.a
00:06:54.425 LIB libspdk_accel.a
00:06:54.425 SO libspdk_init.so.6.0
00:06:54.425 CC lib/fsdev/fsdev_rpc.o
00:06:54.425 SO libspdk_accel.so.16.0
00:06:54.425 CC lib/blob/blob_bs_dev.o
00:06:54.425 LIB libspdk_virtio.a
00:06:54.425 SYMLINK libspdk_init.so
00:06:54.425 SO libspdk_virtio.so.7.0
00:06:54.425 SYMLINK libspdk_accel.so
00:06:54.684 SYMLINK libspdk_virtio.so
00:06:54.684 CC lib/event/app.o
00:06:54.684 CC lib/event/app_rpc.o
00:06:54.684 CC lib/event/log_rpc.o
00:06:54.684 CC lib/event/reactor.o
00:06:54.684 CC lib/event/scheduler_static.o
00:06:54.684 CC lib/bdev/bdev.o
00:06:54.684 CC lib/bdev/bdev_rpc.o
00:06:54.684 LIB libspdk_fsdev.a
00:06:54.684 SO libspdk_fsdev.so.2.0
00:06:54.943 CC lib/bdev/bdev_zone.o
00:06:54.943 CC lib/bdev/part.o
00:06:54.943 SYMLINK libspdk_fsdev.so
00:06:54.943 CC lib/bdev/scsi_nvme.o
00:06:55.202 CC lib/fuse_dispatcher/fuse_dispatcher.o
00:06:55.202 LIB libspdk_event.a
00:06:55.202 SO libspdk_nvme.so.15.0
00:06:55.202 SO libspdk_event.so.14.0
00:06:55.202 SYMLINK libspdk_event.so
00:06:55.460 SYMLINK libspdk_nvme.so
00:06:55.744 LIB libspdk_fuse_dispatcher.a
00:06:55.744 SO libspdk_fuse_dispatcher.so.1.0
00:06:56.002 SYMLINK libspdk_fuse_dispatcher.so
00:06:57.374 LIB libspdk_blob.a
00:06:57.374 SO libspdk_blob.so.12.0
00:06:57.374 SYMLINK libspdk_blob.so
00:06:57.632 CC lib/blobfs/blobfs.o
00:06:57.632 CC lib/blobfs/tree.o
00:06:57.632 CC lib/lvol/lvol.o
00:06:57.889 LIB libspdk_bdev.a
00:06:57.889 SO libspdk_bdev.so.17.0
00:06:57.889 SYMLINK libspdk_bdev.so
00:06:58.146 CC lib/ublk/ublk.o
00:06:58.146 CC lib/ublk/ublk_rpc.o
00:06:58.146 CC lib/ftl/ftl_core.o
00:06:58.146 CC lib/ftl/ftl_init.o
00:06:58.146 CC lib/nvmf/ctrlr.o
00:06:58.146 CC lib/ftl/ftl_layout.o
00:06:58.146 CC lib/scsi/dev.o
00:06:58.146 CC lib/nbd/nbd.o
00:06:58.146 CC lib/nbd/nbd_rpc.o
00:06:58.403 CC lib/ftl/ftl_debug.o
00:06:58.403 CC lib/scsi/lun.o
00:06:58.403 LIB libspdk_blobfs.a
00:06:58.403 SO libspdk_blobfs.so.11.0
00:06:58.403 CC lib/ftl/ftl_io.o
00:06:58.403 CC lib/ftl/ftl_sb.o
00:06:58.403 SYMLINK libspdk_blobfs.so
00:06:58.403 CC lib/ftl/ftl_l2p.o
00:06:58.403 CC lib/ftl/ftl_l2p_flat.o
00:06:58.403 CC lib/nvmf/ctrlr_discovery.o
00:06:58.660 CC lib/scsi/port.o
00:06:58.660 CC lib/ftl/ftl_nv_cache.o
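For orientation in the SPDK portion of the log: each CC/CXX line compiles one object, LIB archives a component into a static library, SO links the versioned shared object, and SYMLINK points the unversioned name at it. Roughly, and only as an illustration rather than the exact link line from SPDK's Makefiles, the SO and SYMLINK steps for the log component amount to:

    # Link a versioned shared object, then create the unversioned dev symlink:
    cc -shared -Wl,-soname,libspdk_log.so.7.1 -o libspdk_log.so.7.1 \
        log.o log_flags.o log_deprecated.o
    ln -sf libspdk_log.so.7.1 libspdk_log.so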
00:06:58.660 CC lib/ftl/ftl_band.o 00:06:58.660 CC lib/scsi/scsi.o 00:06:58.660 LIB libspdk_nbd.a 00:06:58.660 CC lib/scsi/scsi_bdev.o 00:06:58.660 SO libspdk_nbd.so.7.0 00:06:58.660 CC lib/ftl/ftl_band_ops.o 00:06:58.660 SYMLINK libspdk_nbd.so 00:06:58.660 CC lib/ftl/ftl_writer.o 00:06:58.660 CC lib/scsi/scsi_pr.o 00:06:58.660 LIB libspdk_lvol.a 00:06:58.660 LIB libspdk_ublk.a 00:06:58.917 SO libspdk_lvol.so.11.0 00:06:58.917 SO libspdk_ublk.so.3.0 00:06:58.917 SYMLINK libspdk_lvol.so 00:06:58.917 CC lib/nvmf/ctrlr_bdev.o 00:06:58.917 CC lib/ftl/ftl_rq.o 00:06:58.917 SYMLINK libspdk_ublk.so 00:06:58.917 CC lib/nvmf/subsystem.o 00:06:58.917 CC lib/ftl/ftl_reloc.o 00:06:58.917 CC lib/ftl/ftl_l2p_cache.o 00:06:58.917 CC lib/scsi/scsi_rpc.o 00:06:59.186 CC lib/scsi/task.o 00:06:59.186 CC lib/nvmf/nvmf.o 00:06:59.186 CC lib/nvmf/nvmf_rpc.o 00:06:59.186 CC lib/nvmf/transport.o 00:06:59.186 CC lib/ftl/ftl_p2l.o 00:06:59.186 LIB libspdk_scsi.a 00:06:59.443 SO libspdk_scsi.so.9.0 00:06:59.443 CC lib/ftl/ftl_p2l_log.o 00:06:59.443 CC lib/ftl/mngt/ftl_mngt.o 00:06:59.443 SYMLINK libspdk_scsi.so 00:06:59.443 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:06:59.443 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:06:59.700 CC lib/nvmf/tcp.o 00:06:59.700 CC lib/ftl/mngt/ftl_mngt_startup.o 00:06:59.700 CC lib/ftl/mngt/ftl_mngt_md.o 00:06:59.700 CC lib/ftl/mngt/ftl_mngt_misc.o 00:06:59.700 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:06:59.700 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:06:59.957 CC lib/nvmf/stubs.o 00:06:59.957 CC lib/ftl/mngt/ftl_mngt_band.o 00:06:59.957 CC lib/nvmf/mdns_server.o 00:06:59.957 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:06:59.957 CC lib/iscsi/conn.o 00:06:59.957 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:06:59.957 CC lib/nvmf/rdma.o 00:06:59.957 CC lib/vhost/vhost.o 00:06:59.957 CC lib/iscsi/init_grp.o 00:07:00.214 CC lib/iscsi/iscsi.o 00:07:00.214 CC lib/iscsi/param.o 00:07:00.214 CC lib/iscsi/portal_grp.o 00:07:00.214 CC lib/vhost/vhost_rpc.o 00:07:00.214 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:07:00.214 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:07:00.470 CC lib/ftl/utils/ftl_conf.o 00:07:00.470 CC lib/ftl/utils/ftl_md.o 00:07:00.470 CC lib/iscsi/tgt_node.o 00:07:00.726 CC lib/iscsi/iscsi_subsystem.o 00:07:00.726 CC lib/iscsi/iscsi_rpc.o 00:07:00.726 CC lib/iscsi/task.o 00:07:00.726 CC lib/vhost/vhost_scsi.o 00:07:00.726 CC lib/nvmf/auth.o 00:07:00.983 CC lib/vhost/vhost_blk.o 00:07:00.983 CC lib/ftl/utils/ftl_mempool.o 00:07:00.983 CC lib/ftl/utils/ftl_bitmap.o 00:07:00.983 CC lib/vhost/rte_vhost_user.o 00:07:00.983 CC lib/ftl/utils/ftl_property.o 00:07:00.983 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:07:00.983 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:07:00.983 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:07:01.315 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:07:01.315 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:07:01.315 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:07:01.315 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:07:01.315 CC lib/ftl/upgrade/ftl_sb_v3.o 00:07:01.315 CC lib/ftl/upgrade/ftl_sb_v5.o 00:07:01.578 CC lib/ftl/nvc/ftl_nvc_dev.o 00:07:01.578 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:07:01.578 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:07:01.578 LIB libspdk_iscsi.a 00:07:01.578 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:07:01.578 CC lib/ftl/base/ftl_base_dev.o 00:07:01.578 SO libspdk_iscsi.so.8.0 00:07:01.578 CC lib/ftl/base/ftl_base_bdev.o 00:07:01.578 CC lib/ftl/ftl_trace.o 00:07:01.836 SYMLINK libspdk_iscsi.so 00:07:01.836 LIB libspdk_ftl.a 00:07:01.836 LIB libspdk_vhost.a 00:07:02.095 SO libspdk_vhost.so.8.0 00:07:02.095 SO 
libspdk_ftl.so.9.0 00:07:02.095 SYMLINK libspdk_vhost.so 00:07:02.352 LIB libspdk_nvmf.a 00:07:02.352 SYMLINK libspdk_ftl.so 00:07:02.352 SO libspdk_nvmf.so.20.0 00:07:02.613 SYMLINK libspdk_nvmf.so 00:07:02.869 CC module/env_dpdk/env_dpdk_rpc.o 00:07:02.869 CC module/fsdev/aio/fsdev_aio.o 00:07:02.869 CC module/accel/ioat/accel_ioat.o 00:07:02.870 CC module/accel/iaa/accel_iaa.o 00:07:02.870 CC module/sock/posix/posix.o 00:07:02.870 CC module/blob/bdev/blob_bdev.o 00:07:02.870 CC module/keyring/file/keyring.o 00:07:02.870 CC module/accel/dsa/accel_dsa.o 00:07:02.870 CC module/scheduler/dynamic/scheduler_dynamic.o 00:07:02.870 CC module/accel/error/accel_error.o 00:07:02.870 LIB libspdk_env_dpdk_rpc.a 00:07:02.870 SO libspdk_env_dpdk_rpc.so.6.0 00:07:03.127 SYMLINK libspdk_env_dpdk_rpc.so 00:07:03.127 CC module/accel/ioat/accel_ioat_rpc.o 00:07:03.127 CC module/accel/iaa/accel_iaa_rpc.o 00:07:03.127 CC module/keyring/file/keyring_rpc.o 00:07:03.127 CC module/fsdev/aio/fsdev_aio_rpc.o 00:07:03.127 LIB libspdk_scheduler_dynamic.a 00:07:03.127 SO libspdk_scheduler_dynamic.so.4.0 00:07:03.127 CC module/accel/error/accel_error_rpc.o 00:07:03.127 LIB libspdk_accel_iaa.a 00:07:03.127 LIB libspdk_accel_ioat.a 00:07:03.127 LIB libspdk_blob_bdev.a 00:07:03.127 SO libspdk_accel_iaa.so.3.0 00:07:03.127 SO libspdk_accel_ioat.so.6.0 00:07:03.127 SYMLINK libspdk_scheduler_dynamic.so 00:07:03.127 LIB libspdk_keyring_file.a 00:07:03.127 SO libspdk_blob_bdev.so.12.0 00:07:03.127 CC module/accel/dsa/accel_dsa_rpc.o 00:07:03.127 SO libspdk_keyring_file.so.2.0 00:07:03.127 SYMLINK libspdk_accel_iaa.so 00:07:03.127 LIB libspdk_accel_error.a 00:07:03.127 SYMLINK libspdk_accel_ioat.so 00:07:03.385 SYMLINK libspdk_blob_bdev.so 00:07:03.385 CC module/fsdev/aio/linux_aio_mgr.o 00:07:03.385 SYMLINK libspdk_keyring_file.so 00:07:03.385 SO libspdk_accel_error.so.2.0 00:07:03.385 SYMLINK libspdk_accel_error.so 00:07:03.385 LIB libspdk_accel_dsa.a 00:07:03.385 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:07:03.385 SO libspdk_accel_dsa.so.5.0 00:07:03.385 CC module/scheduler/gscheduler/gscheduler.o 00:07:03.385 CC module/keyring/linux/keyring.o 00:07:03.385 SYMLINK libspdk_accel_dsa.so 00:07:03.385 CC module/keyring/linux/keyring_rpc.o 00:07:03.385 LIB libspdk_scheduler_dpdk_governor.a 00:07:03.385 CC module/bdev/delay/vbdev_delay.o 00:07:03.643 SO libspdk_scheduler_dpdk_governor.so.4.0 00:07:03.643 CC module/blobfs/bdev/blobfs_bdev.o 00:07:03.643 LIB libspdk_scheduler_gscheduler.a 00:07:03.643 CC module/bdev/error/vbdev_error.o 00:07:03.643 CC module/bdev/error/vbdev_error_rpc.o 00:07:03.643 SO libspdk_scheduler_gscheduler.so.4.0 00:07:03.643 SYMLINK libspdk_scheduler_dpdk_governor.so 00:07:03.643 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:07:03.643 LIB libspdk_keyring_linux.a 00:07:03.643 CC module/bdev/gpt/gpt.o 00:07:03.643 SYMLINK libspdk_scheduler_gscheduler.so 00:07:03.643 SO libspdk_keyring_linux.so.1.0 00:07:03.643 CC module/bdev/gpt/vbdev_gpt.o 00:07:03.643 LIB libspdk_fsdev_aio.a 00:07:03.643 SYMLINK libspdk_keyring_linux.so 00:07:03.643 SO libspdk_fsdev_aio.so.1.0 00:07:03.643 LIB libspdk_sock_posix.a 00:07:03.643 SO libspdk_sock_posix.so.6.0 00:07:03.643 LIB libspdk_blobfs_bdev.a 00:07:03.643 SYMLINK libspdk_fsdev_aio.so 00:07:03.901 LIB libspdk_bdev_error.a 00:07:03.901 SO libspdk_blobfs_bdev.so.6.0 00:07:03.901 SO libspdk_bdev_error.so.6.0 00:07:03.901 CC module/bdev/lvol/vbdev_lvol.o 00:07:03.901 SYMLINK libspdk_sock_posix.so 00:07:03.901 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:07:03.901 CC 
module/bdev/null/bdev_null.o 00:07:03.901 CC module/bdev/malloc/bdev_malloc.o 00:07:03.901 SYMLINK libspdk_blobfs_bdev.so 00:07:03.901 CC module/bdev/delay/vbdev_delay_rpc.o 00:07:03.901 LIB libspdk_bdev_gpt.a 00:07:03.901 SYMLINK libspdk_bdev_error.so 00:07:03.901 CC module/bdev/malloc/bdev_malloc_rpc.o 00:07:03.901 SO libspdk_bdev_gpt.so.6.0 00:07:03.901 CC module/bdev/nvme/bdev_nvme.o 00:07:03.901 CC module/bdev/passthru/vbdev_passthru.o 00:07:03.901 SYMLINK libspdk_bdev_gpt.so 00:07:03.901 CC module/bdev/raid/bdev_raid.o 00:07:03.901 LIB libspdk_bdev_delay.a 00:07:04.159 SO libspdk_bdev_delay.so.6.0 00:07:04.159 CC module/bdev/split/vbdev_split.o 00:07:04.159 CC module/bdev/null/bdev_null_rpc.o 00:07:04.159 SYMLINK libspdk_bdev_delay.so 00:07:04.159 CC module/bdev/split/vbdev_split_rpc.o 00:07:04.159 CC module/bdev/zone_block/vbdev_zone_block.o 00:07:04.160 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:07:04.160 LIB libspdk_bdev_malloc.a 00:07:04.160 LIB libspdk_bdev_null.a 00:07:04.160 SO libspdk_bdev_malloc.so.6.0 00:07:04.160 LIB libspdk_bdev_lvol.a 00:07:04.417 SO libspdk_bdev_null.so.6.0 00:07:04.417 SO libspdk_bdev_lvol.so.6.0 00:07:04.417 LIB libspdk_bdev_split.a 00:07:04.417 SO libspdk_bdev_split.so.6.0 00:07:04.417 LIB libspdk_bdev_passthru.a 00:07:04.417 SYMLINK libspdk_bdev_malloc.so 00:07:04.417 SYMLINK libspdk_bdev_null.so 00:07:04.417 CC module/bdev/xnvme/bdev_xnvme.o 00:07:04.417 CC module/bdev/raid/bdev_raid_rpc.o 00:07:04.417 SYMLINK libspdk_bdev_lvol.so 00:07:04.417 SO libspdk_bdev_passthru.so.6.0 00:07:04.417 SYMLINK libspdk_bdev_split.so 00:07:04.417 CC module/bdev/aio/bdev_aio.o 00:07:04.417 SYMLINK libspdk_bdev_passthru.so 00:07:04.417 CC module/bdev/raid/bdev_raid_sb.o 00:07:04.417 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:07:04.417 CC module/bdev/ftl/bdev_ftl.o 00:07:04.417 CC module/bdev/iscsi/bdev_iscsi.o 00:07:04.417 CC module/bdev/virtio/bdev_virtio_scsi.o 00:07:04.675 CC module/bdev/ftl/bdev_ftl_rpc.o 00:07:04.675 LIB libspdk_bdev_zone_block.a 00:07:04.675 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:07:04.675 SO libspdk_bdev_zone_block.so.6.0 00:07:04.675 CC module/bdev/raid/raid0.o 00:07:04.675 CC module/bdev/aio/bdev_aio_rpc.o 00:07:04.675 SYMLINK libspdk_bdev_zone_block.so 00:07:04.675 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:07:04.675 CC module/bdev/raid/raid1.o 00:07:04.675 LIB libspdk_bdev_xnvme.a 00:07:04.933 LIB libspdk_bdev_ftl.a 00:07:04.933 SO libspdk_bdev_xnvme.so.3.0 00:07:04.933 SO libspdk_bdev_ftl.so.6.0 00:07:04.933 CC module/bdev/nvme/bdev_nvme_rpc.o 00:07:04.933 LIB libspdk_bdev_aio.a 00:07:04.933 LIB libspdk_bdev_iscsi.a 00:07:04.933 SYMLINK libspdk_bdev_xnvme.so 00:07:04.933 SYMLINK libspdk_bdev_ftl.so 00:07:04.933 CC module/bdev/raid/concat.o 00:07:04.933 SO libspdk_bdev_aio.so.6.0 00:07:04.933 SO libspdk_bdev_iscsi.so.6.0 00:07:04.933 CC module/bdev/virtio/bdev_virtio_blk.o 00:07:04.933 CC module/bdev/virtio/bdev_virtio_rpc.o 00:07:04.933 SYMLINK libspdk_bdev_iscsi.so 00:07:04.933 CC module/bdev/nvme/nvme_rpc.o 00:07:04.933 SYMLINK libspdk_bdev_aio.so 00:07:04.933 CC module/bdev/nvme/bdev_mdns_client.o 00:07:04.933 CC module/bdev/nvme/vbdev_opal.o 00:07:04.933 CC module/bdev/nvme/vbdev_opal_rpc.o 00:07:05.241 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:07:05.241 LIB libspdk_bdev_raid.a 00:07:05.241 SO libspdk_bdev_raid.so.6.0 00:07:05.241 LIB libspdk_bdev_virtio.a 00:07:05.241 SYMLINK libspdk_bdev_raid.so 00:07:05.241 SO libspdk_bdev_virtio.so.6.0 00:07:05.241 SYMLINK libspdk_bdev_virtio.so 00:07:06.219 LIB libspdk_bdev_nvme.a 
00:07:06.219 SO libspdk_bdev_nvme.so.7.1 00:07:06.219 SYMLINK libspdk_bdev_nvme.so 00:07:06.843 CC module/event/subsystems/vmd/vmd.o 00:07:06.844 CC module/event/subsystems/vmd/vmd_rpc.o 00:07:06.844 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:07:06.844 CC module/event/subsystems/fsdev/fsdev.o 00:07:06.844 CC module/event/subsystems/iobuf/iobuf.o 00:07:06.844 CC module/event/subsystems/keyring/keyring.o 00:07:06.844 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:07:06.844 CC module/event/subsystems/sock/sock.o 00:07:06.844 CC module/event/subsystems/scheduler/scheduler.o 00:07:06.844 LIB libspdk_event_scheduler.a 00:07:06.844 LIB libspdk_event_sock.a 00:07:06.844 LIB libspdk_event_vhost_blk.a 00:07:06.844 LIB libspdk_event_iobuf.a 00:07:06.844 LIB libspdk_event_keyring.a 00:07:06.844 LIB libspdk_event_vmd.a 00:07:06.844 SO libspdk_event_scheduler.so.4.0 00:07:06.844 SO libspdk_event_vhost_blk.so.3.0 00:07:06.844 SO libspdk_event_sock.so.5.0 00:07:06.844 SO libspdk_event_iobuf.so.3.0 00:07:06.844 SO libspdk_event_keyring.so.1.0 00:07:06.844 SO libspdk_event_vmd.so.6.0 00:07:06.844 LIB libspdk_event_fsdev.a 00:07:06.844 SYMLINK libspdk_event_scheduler.so 00:07:06.844 SO libspdk_event_fsdev.so.1.0 00:07:06.844 SYMLINK libspdk_event_keyring.so 00:07:06.844 SYMLINK libspdk_event_sock.so 00:07:06.844 SYMLINK libspdk_event_iobuf.so 00:07:06.844 SYMLINK libspdk_event_vhost_blk.so 00:07:06.844 SYMLINK libspdk_event_vmd.so 00:07:06.844 SYMLINK libspdk_event_fsdev.so 00:07:07.101 CC module/event/subsystems/accel/accel.o 00:07:07.358 LIB libspdk_event_accel.a 00:07:07.358 SO libspdk_event_accel.so.6.0 00:07:07.358 SYMLINK libspdk_event_accel.so 00:07:07.615 CC module/event/subsystems/bdev/bdev.o 00:07:07.615 LIB libspdk_event_bdev.a 00:07:07.872 SO libspdk_event_bdev.so.6.0 00:07:07.872 SYMLINK libspdk_event_bdev.so 00:07:07.872 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:07:07.872 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:07:07.872 CC module/event/subsystems/scsi/scsi.o 00:07:07.872 CC module/event/subsystems/nbd/nbd.o 00:07:07.872 CC module/event/subsystems/ublk/ublk.o 00:07:08.129 LIB libspdk_event_scsi.a 00:07:08.129 SO libspdk_event_scsi.so.6.0 00:07:08.129 LIB libspdk_event_ublk.a 00:07:08.129 LIB libspdk_event_nbd.a 00:07:08.129 SO libspdk_event_ublk.so.3.0 00:07:08.129 SO libspdk_event_nbd.so.6.0 00:07:08.129 SYMLINK libspdk_event_scsi.so 00:07:08.129 SYMLINK libspdk_event_ublk.so 00:07:08.130 LIB libspdk_event_nvmf.a 00:07:08.130 SYMLINK libspdk_event_nbd.so 00:07:08.130 SO libspdk_event_nvmf.so.6.0 00:07:08.387 SYMLINK libspdk_event_nvmf.so 00:07:08.387 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:07:08.387 CC module/event/subsystems/iscsi/iscsi.o 00:07:08.387 LIB libspdk_event_vhost_scsi.a 00:07:08.387 SO libspdk_event_vhost_scsi.so.3.0 00:07:08.387 LIB libspdk_event_iscsi.a 00:07:08.645 SO libspdk_event_iscsi.so.6.0 00:07:08.645 SYMLINK libspdk_event_vhost_scsi.so 00:07:08.645 SYMLINK libspdk_event_iscsi.so 00:07:08.645 SO libspdk.so.6.0 00:07:08.645 SYMLINK libspdk.so 00:07:08.902 CXX app/trace/trace.o 00:07:08.902 CC app/trace_record/trace_record.o 00:07:08.902 TEST_HEADER include/spdk/accel.h 00:07:08.902 TEST_HEADER include/spdk/accel_module.h 00:07:08.902 TEST_HEADER include/spdk/assert.h 00:07:08.902 TEST_HEADER include/spdk/barrier.h 00:07:08.902 TEST_HEADER include/spdk/base64.h 00:07:08.902 TEST_HEADER include/spdk/bdev.h 00:07:08.902 TEST_HEADER include/spdk/bdev_module.h 00:07:08.902 TEST_HEADER include/spdk/bdev_zone.h 00:07:08.902 TEST_HEADER 
include/spdk/bit_array.h 00:07:08.902 TEST_HEADER include/spdk/bit_pool.h 00:07:08.902 TEST_HEADER include/spdk/blob_bdev.h 00:07:08.902 TEST_HEADER include/spdk/blobfs_bdev.h 00:07:08.902 TEST_HEADER include/spdk/blobfs.h 00:07:08.902 TEST_HEADER include/spdk/blob.h 00:07:08.902 TEST_HEADER include/spdk/conf.h 00:07:08.902 TEST_HEADER include/spdk/config.h 00:07:08.902 CC examples/interrupt_tgt/interrupt_tgt.o 00:07:08.902 TEST_HEADER include/spdk/cpuset.h 00:07:08.902 TEST_HEADER include/spdk/crc16.h 00:07:08.902 TEST_HEADER include/spdk/crc32.h 00:07:08.902 TEST_HEADER include/spdk/crc64.h 00:07:08.902 TEST_HEADER include/spdk/dif.h 00:07:08.902 TEST_HEADER include/spdk/dma.h 00:07:08.902 TEST_HEADER include/spdk/endian.h 00:07:08.902 TEST_HEADER include/spdk/env_dpdk.h 00:07:08.902 TEST_HEADER include/spdk/env.h 00:07:08.902 TEST_HEADER include/spdk/event.h 00:07:08.902 TEST_HEADER include/spdk/fd_group.h 00:07:08.902 TEST_HEADER include/spdk/fd.h 00:07:08.902 TEST_HEADER include/spdk/file.h 00:07:08.902 TEST_HEADER include/spdk/fsdev.h 00:07:08.902 TEST_HEADER include/spdk/fsdev_module.h 00:07:08.902 TEST_HEADER include/spdk/ftl.h 00:07:08.902 TEST_HEADER include/spdk/fuse_dispatcher.h 00:07:08.902 TEST_HEADER include/spdk/gpt_spec.h 00:07:08.902 TEST_HEADER include/spdk/hexlify.h 00:07:08.902 CC examples/util/zipf/zipf.o 00:07:08.902 CC examples/ioat/perf/perf.o 00:07:08.902 TEST_HEADER include/spdk/histogram_data.h 00:07:08.902 TEST_HEADER include/spdk/idxd.h 00:07:08.902 CC test/thread/poller_perf/poller_perf.o 00:07:08.902 TEST_HEADER include/spdk/idxd_spec.h 00:07:08.902 TEST_HEADER include/spdk/init.h 00:07:08.902 TEST_HEADER include/spdk/ioat.h 00:07:08.902 TEST_HEADER include/spdk/ioat_spec.h 00:07:08.902 TEST_HEADER include/spdk/iscsi_spec.h 00:07:08.902 TEST_HEADER include/spdk/json.h 00:07:08.902 TEST_HEADER include/spdk/jsonrpc.h 00:07:08.902 TEST_HEADER include/spdk/keyring.h 00:07:08.902 TEST_HEADER include/spdk/keyring_module.h 00:07:08.902 TEST_HEADER include/spdk/likely.h 00:07:08.902 TEST_HEADER include/spdk/log.h 00:07:08.902 TEST_HEADER include/spdk/lvol.h 00:07:08.902 TEST_HEADER include/spdk/md5.h 00:07:08.902 TEST_HEADER include/spdk/memory.h 00:07:08.902 TEST_HEADER include/spdk/mmio.h 00:07:08.902 TEST_HEADER include/spdk/nbd.h 00:07:08.902 TEST_HEADER include/spdk/net.h 00:07:08.902 TEST_HEADER include/spdk/notify.h 00:07:08.902 TEST_HEADER include/spdk/nvme.h 00:07:08.902 TEST_HEADER include/spdk/nvme_intel.h 00:07:08.902 TEST_HEADER include/spdk/nvme_ocssd.h 00:07:08.902 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:07:08.902 TEST_HEADER include/spdk/nvme_spec.h 00:07:08.902 TEST_HEADER include/spdk/nvme_zns.h 00:07:08.902 TEST_HEADER include/spdk/nvmf_cmd.h 00:07:08.902 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:07:08.902 TEST_HEADER include/spdk/nvmf.h 00:07:08.902 CC test/dma/test_dma/test_dma.o 00:07:08.902 TEST_HEADER include/spdk/nvmf_spec.h 00:07:08.902 TEST_HEADER include/spdk/nvmf_transport.h 00:07:08.902 TEST_HEADER include/spdk/opal.h 00:07:08.902 TEST_HEADER include/spdk/opal_spec.h 00:07:08.902 TEST_HEADER include/spdk/pci_ids.h 00:07:08.902 TEST_HEADER include/spdk/pipe.h 00:07:08.902 TEST_HEADER include/spdk/queue.h 00:07:08.902 CC test/app/bdev_svc/bdev_svc.o 00:07:08.902 TEST_HEADER include/spdk/reduce.h 00:07:09.160 TEST_HEADER include/spdk/rpc.h 00:07:09.160 CC test/env/mem_callbacks/mem_callbacks.o 00:07:09.160 TEST_HEADER include/spdk/scheduler.h 00:07:09.160 TEST_HEADER include/spdk/scsi.h 00:07:09.160 TEST_HEADER 
include/spdk/scsi_spec.h 00:07:09.160 TEST_HEADER include/spdk/sock.h 00:07:09.160 TEST_HEADER include/spdk/stdinc.h 00:07:09.160 TEST_HEADER include/spdk/string.h 00:07:09.160 TEST_HEADER include/spdk/thread.h 00:07:09.160 TEST_HEADER include/spdk/trace.h 00:07:09.160 TEST_HEADER include/spdk/trace_parser.h 00:07:09.160 TEST_HEADER include/spdk/tree.h 00:07:09.160 TEST_HEADER include/spdk/ublk.h 00:07:09.160 TEST_HEADER include/spdk/util.h 00:07:09.160 TEST_HEADER include/spdk/uuid.h 00:07:09.160 TEST_HEADER include/spdk/version.h 00:07:09.160 TEST_HEADER include/spdk/vfio_user_pci.h 00:07:09.160 TEST_HEADER include/spdk/vfio_user_spec.h 00:07:09.160 TEST_HEADER include/spdk/vhost.h 00:07:09.160 TEST_HEADER include/spdk/vmd.h 00:07:09.160 TEST_HEADER include/spdk/xor.h 00:07:09.160 TEST_HEADER include/spdk/zipf.h 00:07:09.160 CXX test/cpp_headers/accel.o 00:07:09.160 LINK interrupt_tgt 00:07:09.160 LINK poller_perf 00:07:09.160 LINK zipf 00:07:09.160 LINK spdk_trace_record 00:07:09.160 LINK ioat_perf 00:07:09.160 LINK bdev_svc 00:07:09.160 CXX test/cpp_headers/accel_module.o 00:07:09.160 CXX test/cpp_headers/assert.o 00:07:09.160 CXX test/cpp_headers/barrier.o 00:07:09.419 LINK spdk_trace 00:07:09.419 CC test/rpc_client/rpc_client_test.o 00:07:09.419 CC examples/ioat/verify/verify.o 00:07:09.419 CXX test/cpp_headers/base64.o 00:07:09.419 CC examples/thread/thread/thread_ex.o 00:07:09.419 LINK rpc_client_test 00:07:09.419 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:07:09.419 CC examples/vmd/lsvmd/lsvmd.o 00:07:09.419 LINK test_dma 00:07:09.419 CC examples/sock/hello_world/hello_sock.o 00:07:09.678 LINK mem_callbacks 00:07:09.678 CXX test/cpp_headers/bdev.o 00:07:09.678 LINK verify 00:07:09.678 CC app/nvmf_tgt/nvmf_main.o 00:07:09.678 LINK lsvmd 00:07:09.678 LINK thread 00:07:09.678 CXX test/cpp_headers/bdev_module.o 00:07:09.678 CC test/env/vtophys/vtophys.o 00:07:09.678 CC examples/vmd/led/led.o 00:07:09.937 LINK hello_sock 00:07:09.937 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:07:09.937 LINK nvmf_tgt 00:07:09.937 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:07:09.937 LINK vtophys 00:07:09.937 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:07:09.937 LINK led 00:07:09.937 CXX test/cpp_headers/bdev_zone.o 00:07:09.937 CC app/iscsi_tgt/iscsi_tgt.o 00:07:09.937 CXX test/cpp_headers/bit_array.o 00:07:09.937 LINK nvme_fuzz 00:07:10.195 CXX test/cpp_headers/bit_pool.o 00:07:10.195 CC test/event/event_perf/event_perf.o 00:07:10.195 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:07:10.195 LINK iscsi_tgt 00:07:10.195 CC app/spdk_tgt/spdk_tgt.o 00:07:10.195 CC test/event/reactor/reactor.o 00:07:10.195 CC test/event/reactor_perf/reactor_perf.o 00:07:10.195 CC examples/idxd/perf/perf.o 00:07:10.195 CXX test/cpp_headers/blob_bdev.o 00:07:10.195 LINK env_dpdk_post_init 00:07:10.195 LINK event_perf 00:07:10.195 CXX test/cpp_headers/blobfs_bdev.o 00:07:10.195 LINK reactor_perf 00:07:10.195 LINK reactor 00:07:10.452 LINK vhost_fuzz 00:07:10.452 LINK spdk_tgt 00:07:10.452 CXX test/cpp_headers/blobfs.o 00:07:10.452 CC test/env/memory/memory_ut.o 00:07:10.452 CC test/app/histogram_perf/histogram_perf.o 00:07:10.452 CC test/event/app_repeat/app_repeat.o 00:07:10.452 CXX test/cpp_headers/blob.o 00:07:10.452 LINK idxd_perf 00:07:10.452 CC test/event/scheduler/scheduler.o 00:07:10.709 LINK histogram_perf 00:07:10.709 CC test/nvme/aer/aer.o 00:07:10.709 CC app/spdk_lspci/spdk_lspci.o 00:07:10.709 CC test/nvme/reset/reset.o 00:07:10.709 LINK app_repeat 00:07:10.709 CXX test/cpp_headers/conf.o 00:07:10.709 
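The TEST_HEADER and CXX test/cpp_headers/*.o lines above come from a check that every public SPDK header compiles standalone, i.e. is self-contained and C++-safe. A minimal sketch of that kind of check (paths relative to a checkout; the real harness lives under test/cpp_headers) could look like:

  # compile each public header on its own to prove it pulls in what it needs
  # run from the repository root so -I include resolves <spdk/...>
  for hdr in include/spdk/*.h; do
      name=$(basename "$hdr" .h)
      printf '#include <spdk/%s.h>\n' "$name" > "/tmp/${name}.cpp"
      g++ -std=c++11 -I include -c "/tmp/${name}.cpp" -o "/tmp/${name}.o" \
          || echo "header not self-contained: $hdr"
  done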
CXX test/cpp_headers/config.o 00:07:10.709 LINK spdk_lspci 00:07:10.709 LINK scheduler 00:07:10.709 CC app/spdk_nvme_perf/perf.o 00:07:10.709 CXX test/cpp_headers/cpuset.o 00:07:10.966 LINK aer 00:07:10.966 CC examples/fsdev/hello_world/hello_fsdev.o 00:07:10.966 CC test/env/pci/pci_ut.o 00:07:10.966 LINK reset 00:07:10.966 CXX test/cpp_headers/crc16.o 00:07:10.966 CXX test/cpp_headers/crc32.o 00:07:10.966 CXX test/cpp_headers/crc64.o 00:07:10.966 CC app/spdk_nvme_identify/identify.o 00:07:11.223 CC app/spdk_nvme_discover/discovery_aer.o 00:07:11.223 CC test/nvme/sgl/sgl.o 00:07:11.223 CXX test/cpp_headers/dif.o 00:07:11.223 LINK hello_fsdev 00:07:11.223 CC test/accel/dif/dif.o 00:07:11.223 CXX test/cpp_headers/dma.o 00:07:11.223 LINK spdk_nvme_discover 00:07:11.223 LINK pci_ut 00:07:11.223 LINK sgl 00:07:11.480 CXX test/cpp_headers/endian.o 00:07:11.480 CXX test/cpp_headers/env_dpdk.o 00:07:11.480 CC examples/accel/perf/accel_perf.o 00:07:11.480 CC test/nvme/e2edp/nvme_dp.o 00:07:11.480 CC test/nvme/overhead/overhead.o 00:07:11.480 LINK memory_ut 00:07:11.480 CXX test/cpp_headers/env.o 00:07:11.480 LINK iscsi_fuzz 00:07:11.737 CC test/nvme/err_injection/err_injection.o 00:07:11.737 LINK spdk_nvme_perf 00:07:11.737 CXX test/cpp_headers/event.o 00:07:11.737 LINK err_injection 00:07:11.737 CC test/nvme/startup/startup.o 00:07:11.737 CXX test/cpp_headers/fd_group.o 00:07:11.737 LINK spdk_nvme_identify 00:07:11.737 LINK nvme_dp 00:07:11.737 LINK overhead 00:07:11.737 CC test/app/jsoncat/jsoncat.o 00:07:11.994 LINK dif 00:07:11.994 LINK accel_perf 00:07:11.994 CXX test/cpp_headers/fd.o 00:07:11.994 LINK startup 00:07:11.994 CC test/nvme/reserve/reserve.o 00:07:11.994 CC test/blobfs/mkfs/mkfs.o 00:07:11.994 LINK jsoncat 00:07:11.994 CC app/spdk_top/spdk_top.o 00:07:11.994 CXX test/cpp_headers/file.o 00:07:11.994 CC test/nvme/simple_copy/simple_copy.o 00:07:12.251 LINK mkfs 00:07:12.251 CXX test/cpp_headers/fsdev.o 00:07:12.251 LINK reserve 00:07:12.251 CC test/nvme/connect_stress/connect_stress.o 00:07:12.251 CC test/lvol/esnap/esnap.o 00:07:12.251 CC test/app/stub/stub.o 00:07:12.251 CC test/bdev/bdevio/bdevio.o 00:07:12.251 CC examples/blob/hello_world/hello_blob.o 00:07:12.251 CXX test/cpp_headers/fsdev_module.o 00:07:12.251 LINK simple_copy 00:07:12.251 CXX test/cpp_headers/ftl.o 00:07:12.251 LINK connect_stress 00:07:12.519 LINK stub 00:07:12.519 CXX test/cpp_headers/fuse_dispatcher.o 00:07:12.519 CXX test/cpp_headers/gpt_spec.o 00:07:12.519 CC examples/nvme/hello_world/hello_world.o 00:07:12.519 CXX test/cpp_headers/hexlify.o 00:07:12.519 LINK hello_blob 00:07:12.519 CXX test/cpp_headers/histogram_data.o 00:07:12.519 CXX test/cpp_headers/idxd.o 00:07:12.519 CXX test/cpp_headers/idxd_spec.o 00:07:12.519 CXX test/cpp_headers/init.o 00:07:12.519 CC test/nvme/boot_partition/boot_partition.o 00:07:12.519 LINK hello_world 00:07:12.776 CXX test/cpp_headers/ioat.o 00:07:12.776 CXX test/cpp_headers/ioat_spec.o 00:07:12.776 LINK bdevio 00:07:12.776 CXX test/cpp_headers/iscsi_spec.o 00:07:12.776 LINK boot_partition 00:07:12.776 CC examples/blob/cli/blobcli.o 00:07:12.776 CC examples/nvme/reconnect/reconnect.o 00:07:12.776 CXX test/cpp_headers/json.o 00:07:12.776 CC examples/nvme/nvme_manage/nvme_manage.o 00:07:12.776 CC test/nvme/compliance/nvme_compliance.o 00:07:12.776 CC app/vhost/vhost.o 00:07:13.032 CC examples/nvme/arbitration/arbitration.o 00:07:13.032 CXX test/cpp_headers/jsonrpc.o 00:07:13.032 LINK spdk_top 00:07:13.032 CC examples/bdev/hello_world/hello_bdev.o 00:07:13.032 CXX 
test/cpp_headers/keyring.o 00:07:13.032 LINK vhost 00:07:13.032 CXX test/cpp_headers/keyring_module.o 00:07:13.032 LINK blobcli 00:07:13.032 LINK reconnect 00:07:13.288 LINK arbitration 00:07:13.288 LINK hello_bdev 00:07:13.288 LINK nvme_compliance 00:07:13.288 CXX test/cpp_headers/likely.o 00:07:13.288 CXX test/cpp_headers/log.o 00:07:13.288 CC examples/bdev/bdevperf/bdevperf.o 00:07:13.288 CC app/spdk_dd/spdk_dd.o 00:07:13.288 CC examples/nvme/cmb_copy/cmb_copy.o 00:07:13.288 CC examples/nvme/hotplug/hotplug.o 00:07:13.288 CXX test/cpp_headers/lvol.o 00:07:13.288 LINK nvme_manage 00:07:13.288 CC test/nvme/fused_ordering/fused_ordering.o 00:07:13.545 LINK cmb_copy 00:07:13.545 CXX test/cpp_headers/md5.o 00:07:13.545 CXX test/cpp_headers/memory.o 00:07:13.545 CC app/fio/nvme/fio_plugin.o 00:07:13.545 CC test/nvme/doorbell_aers/doorbell_aers.o 00:07:13.545 LINK fused_ordering 00:07:13.545 LINK hotplug 00:07:13.545 LINK spdk_dd 00:07:13.802 CC app/fio/bdev/fio_plugin.o 00:07:13.802 CXX test/cpp_headers/mmio.o 00:07:13.802 CC examples/nvme/abort/abort.o 00:07:13.802 CC test/nvme/fdp/fdp.o 00:07:13.802 LINK doorbell_aers 00:07:13.802 CC test/nvme/cuse/cuse.o 00:07:13.802 CXX test/cpp_headers/nbd.o 00:07:13.802 CXX test/cpp_headers/net.o 00:07:13.802 CXX test/cpp_headers/notify.o 00:07:13.802 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:07:14.058 CXX test/cpp_headers/nvme.o 00:07:14.059 CXX test/cpp_headers/nvme_intel.o 00:07:14.059 LINK spdk_nvme 00:07:14.059 LINK bdevperf 00:07:14.059 LINK pmr_persistence 00:07:14.059 LINK fdp 00:07:14.059 CXX test/cpp_headers/nvme_ocssd.o 00:07:14.059 LINK abort 00:07:14.059 CXX test/cpp_headers/nvme_ocssd_spec.o 00:07:14.059 CXX test/cpp_headers/nvme_spec.o 00:07:14.316 CXX test/cpp_headers/nvme_zns.o 00:07:14.316 CXX test/cpp_headers/nvmf_cmd.o 00:07:14.316 LINK spdk_bdev 00:07:14.316 CXX test/cpp_headers/nvmf_fc_spec.o 00:07:14.316 CXX test/cpp_headers/nvmf.o 00:07:14.316 CXX test/cpp_headers/nvmf_spec.o 00:07:14.316 CXX test/cpp_headers/nvmf_transport.o 00:07:14.316 CXX test/cpp_headers/opal.o 00:07:14.316 CXX test/cpp_headers/opal_spec.o 00:07:14.316 CXX test/cpp_headers/pci_ids.o 00:07:14.316 CXX test/cpp_headers/pipe.o 00:07:14.316 CXX test/cpp_headers/queue.o 00:07:14.316 CXX test/cpp_headers/reduce.o 00:07:14.316 CC examples/nvmf/nvmf/nvmf.o 00:07:14.316 CXX test/cpp_headers/rpc.o 00:07:14.573 CXX test/cpp_headers/scheduler.o 00:07:14.573 CXX test/cpp_headers/scsi.o 00:07:14.573 CXX test/cpp_headers/scsi_spec.o 00:07:14.573 CXX test/cpp_headers/sock.o 00:07:14.573 CXX test/cpp_headers/stdinc.o 00:07:14.573 CXX test/cpp_headers/string.o 00:07:14.573 CXX test/cpp_headers/thread.o 00:07:14.573 CXX test/cpp_headers/trace.o 00:07:14.573 CXX test/cpp_headers/trace_parser.o 00:07:14.573 CXX test/cpp_headers/tree.o 00:07:14.573 CXX test/cpp_headers/ublk.o 00:07:14.573 CXX test/cpp_headers/util.o 00:07:14.573 CXX test/cpp_headers/uuid.o 00:07:14.573 CXX test/cpp_headers/version.o 00:07:14.573 CXX test/cpp_headers/vfio_user_pci.o 00:07:14.573 CXX test/cpp_headers/vfio_user_spec.o 00:07:14.832 CXX test/cpp_headers/vhost.o 00:07:14.832 LINK nvmf 00:07:14.832 CXX test/cpp_headers/vmd.o 00:07:14.832 CXX test/cpp_headers/xor.o 00:07:14.832 CXX test/cpp_headers/zipf.o 00:07:15.089 LINK cuse 00:07:17.622 LINK esnap 00:07:17.622 00:07:17.622 real 1m17.969s 00:07:17.622 user 7m12.431s 00:07:17.622 sys 1m15.572s 00:07:17.622 06:33:30 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:07:17.622 06:33:30 make -- common/autotest_common.sh@10 -- $ set 
+x 00:07:17.622 ************************************ 00:07:17.622 END TEST make 00:07:17.622 ************************************ 00:07:17.622 06:33:30 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:07:17.622 06:33:30 -- pm/common@29 -- $ signal_monitor_resources TERM 00:07:17.622 06:33:30 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:07:17.622 06:33:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:17.622 06:33:30 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:07:17.622 06:33:30 -- pm/common@44 -- $ pid=5067 00:07:17.622 06:33:30 -- pm/common@50 -- $ kill -TERM 5067 00:07:17.622 06:33:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:17.622 06:33:30 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:07:17.622 06:33:30 -- pm/common@44 -- $ pid=5068 00:07:17.622 06:33:30 -- pm/common@50 -- $ kill -TERM 5068 00:07:17.622 06:33:30 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:07:17.622 06:33:30 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:07:17.622 06:33:30 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:17.622 06:33:30 -- common/autotest_common.sh@1711 -- # lcov --version 00:07:17.622 06:33:30 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:17.622 06:33:30 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:17.622 06:33:30 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:17.622 06:33:30 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:17.622 06:33:30 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:17.622 06:33:30 -- scripts/common.sh@336 -- # IFS=.-: 00:07:17.622 06:33:30 -- scripts/common.sh@336 -- # read -ra ver1 00:07:17.622 06:33:30 -- scripts/common.sh@337 -- # IFS=.-: 00:07:17.622 06:33:30 -- scripts/common.sh@337 -- # read -ra ver2 00:07:17.622 06:33:30 -- scripts/common.sh@338 -- # local 'op=<' 00:07:17.622 06:33:30 -- scripts/common.sh@340 -- # ver1_l=2 00:07:17.622 06:33:30 -- scripts/common.sh@341 -- # ver2_l=1 00:07:17.622 06:33:30 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:17.622 06:33:30 -- scripts/common.sh@344 -- # case "$op" in 00:07:17.622 06:33:30 -- scripts/common.sh@345 -- # : 1 00:07:17.622 06:33:30 -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:17.622 06:33:30 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
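The stop_monitor_resources trace above tears down the CPU-load and vmstat collectors by reading each pidfile under the power output directory and sending TERM. A stripped-down sketch of the same pidfile pattern (directory path illustrative):

  # stop each resource monitor recorded in a pidfile
  for pidfile in /path/to/output/power/collect-*.pid; do   # illustrative location
      [[ -e "$pidfile" ]] || continue
      pid=$(<"$pidfile")
      # TERM (not KILL) lets the collector flush its log before exiting
      kill -TERM "$pid" 2>/dev/null || true
  done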
ver1_l : ver2_l) )) 00:07:17.622 06:33:30 -- scripts/common.sh@365 -- # decimal 1 00:07:17.622 06:33:30 -- scripts/common.sh@353 -- # local d=1 00:07:17.622 06:33:30 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:17.622 06:33:30 -- scripts/common.sh@355 -- # echo 1 00:07:17.622 06:33:30 -- scripts/common.sh@365 -- # ver1[v]=1 00:07:17.622 06:33:30 -- scripts/common.sh@366 -- # decimal 2 00:07:17.622 06:33:30 -- scripts/common.sh@353 -- # local d=2 00:07:17.622 06:33:30 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:17.622 06:33:30 -- scripts/common.sh@355 -- # echo 2 00:07:17.622 06:33:30 -- scripts/common.sh@366 -- # ver2[v]=2 00:07:17.622 06:33:30 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:17.622 06:33:30 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:17.622 06:33:30 -- scripts/common.sh@368 -- # return 0 00:07:17.622 06:33:30 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:17.622 06:33:30 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:17.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.622 --rc genhtml_branch_coverage=1 00:07:17.622 --rc genhtml_function_coverage=1 00:07:17.622 --rc genhtml_legend=1 00:07:17.622 --rc geninfo_all_blocks=1 00:07:17.622 --rc geninfo_unexecuted_blocks=1 00:07:17.622 00:07:17.622 ' 00:07:17.622 06:33:30 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:17.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.622 --rc genhtml_branch_coverage=1 00:07:17.622 --rc genhtml_function_coverage=1 00:07:17.622 --rc genhtml_legend=1 00:07:17.622 --rc geninfo_all_blocks=1 00:07:17.622 --rc geninfo_unexecuted_blocks=1 00:07:17.622 00:07:17.622 ' 00:07:17.622 06:33:30 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:17.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.623 --rc genhtml_branch_coverage=1 00:07:17.623 --rc genhtml_function_coverage=1 00:07:17.623 --rc genhtml_legend=1 00:07:17.623 --rc geninfo_all_blocks=1 00:07:17.623 --rc geninfo_unexecuted_blocks=1 00:07:17.623 00:07:17.623 ' 00:07:17.623 06:33:30 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:17.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.623 --rc genhtml_branch_coverage=1 00:07:17.623 --rc genhtml_function_coverage=1 00:07:17.623 --rc genhtml_legend=1 00:07:17.623 --rc geninfo_all_blocks=1 00:07:17.623 --rc geninfo_unexecuted_blocks=1 00:07:17.623 00:07:17.623 ' 00:07:17.623 06:33:30 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:17.623 06:33:30 -- nvmf/common.sh@7 -- # uname -s 00:07:17.623 06:33:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:17.623 06:33:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:17.623 06:33:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:17.623 06:33:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:17.623 06:33:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:17.623 06:33:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:17.623 06:33:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:17.623 06:33:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:17.623 06:33:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:17.623 06:33:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:17.623 06:33:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b8a95972-adac-4888-bff5-5983b481f9e9 00:07:17.623 
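The lt/cmp_versions/decimal trace above is a plain-bash version comparison: both strings are split on '.', '-' and ':' into arrays, then the fields are compared numerically left to right until one side wins. The same logic as a standalone sketch (function name hypothetical; the repo reaches it through cmp_versions in scripts/common.sh):

  # return 0 if version $1 sorts before version $2
  version_lt() {
      local -a ver1 ver2
      IFS='.-:' read -ra ver1 <<< "$1"
      IFS='.-:' read -ra ver2 <<< "$2"
      local v len=${#ver1[@]}
      (( ${#ver2[@]} > len )) && len=${#ver2[@]}
      for (( v = 0; v < len; v++ )); do
          # missing trailing fields compare as 0, so "1.15" vs "2" works
          local a=${ver1[v]:-0} b=${ver2[v]:-0}
          (( a < b )) && return 0
          (( a > b )) && return 1
      done
      return 1    # equal is not less-than
  }
  version_lt 1.15 2 && echo "lcov 1.15 is older than 2"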
06:33:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=b8a95972-adac-4888-bff5-5983b481f9e9 00:07:17.623 06:33:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:17.623 06:33:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:17.623 06:33:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:17.623 06:33:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:17.623 06:33:30 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:17.623 06:33:30 -- scripts/common.sh@15 -- # shopt -s extglob 00:07:17.623 06:33:30 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:17.623 06:33:30 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:17.623 06:33:30 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:17.623 06:33:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.623 06:33:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.623 06:33:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.623 06:33:30 -- paths/export.sh@5 -- # export PATH 00:07:17.623 06:33:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.623 06:33:30 -- nvmf/common.sh@51 -- # : 0 00:07:17.623 06:33:30 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:17.623 06:33:30 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:17.623 06:33:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:17.623 06:33:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:17.623 06:33:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:17.623 06:33:30 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:17.623 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:17.623 06:33:30 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:17.623 06:33:30 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:17.623 06:33:30 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:17.623 06:33:30 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:07:17.623 06:33:30 -- spdk/autotest.sh@32 -- # uname -s 00:07:17.623 06:33:30 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:07:17.623 06:33:30 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:07:17.623 06:33:30 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:07:17.623 06:33:30 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:07:17.623 06:33:30 -- 
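Note the recorded failure "/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected": the trace shows '[' '' -eq 1 ']', an unset flag reaching a numeric test as an empty string. A defensive pattern that avoids the error (sketch only; variable name illustrative, not the repo's actual fix) is to default the flag before comparing:

  # '' -eq 1 is an error for test; default empty flags to 0 first
  if [[ "${SPDK_TEST_SOME_FLAG:-0}" -eq 1 ]]; then   # illustrative flag name
      echo "flag-gated setup would run here"
  fi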
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:07:17.623 06:33:30 -- spdk/autotest.sh@44 -- # modprobe nbd 00:07:17.623 06:33:30 -- spdk/autotest.sh@46 -- # type -P udevadm 00:07:17.623 06:33:30 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:07:17.623 06:33:30 -- spdk/autotest.sh@48 -- # udevadm_pid=54344 00:07:17.623 06:33:30 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:07:17.623 06:33:30 -- pm/common@17 -- # local monitor 00:07:17.623 06:33:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:17.623 06:33:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:17.623 06:33:30 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:07:17.623 06:33:30 -- pm/common@25 -- # sleep 1 00:07:17.623 06:33:30 -- pm/common@21 -- # date +%s 00:07:17.623 06:33:30 -- pm/common@21 -- # date +%s 00:07:17.623 06:33:30 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733466810 00:07:17.623 06:33:30 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733466810 00:07:17.623 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733466810_collect-vmstat.pm.log 00:07:17.623 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733466810_collect-cpu-load.pm.log 00:07:18.559 06:33:31 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:07:18.559 06:33:31 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:07:18.559 06:33:31 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:18.559 06:33:31 -- common/autotest_common.sh@10 -- # set +x 00:07:18.559 06:33:31 -- spdk/autotest.sh@59 -- # create_test_list 00:07:18.559 06:33:31 -- common/autotest_common.sh@752 -- # xtrace_disable 00:07:18.559 06:33:31 -- common/autotest_common.sh@10 -- # set +x 00:07:18.816 06:33:31 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:07:18.816 06:33:31 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:07:18.816 06:33:31 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:07:18.816 06:33:31 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:07:18.816 06:33:31 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:07:18.816 06:33:31 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:07:18.816 06:33:31 -- common/autotest_common.sh@1457 -- # uname 00:07:18.816 06:33:31 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:07:18.816 06:33:31 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:07:18.816 06:33:31 -- common/autotest_common.sh@1477 -- # uname 00:07:18.816 06:33:31 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:07:18.817 06:33:31 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:07:18.817 06:33:31 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:07:18.817 lcov: LCOV version 1.15 00:07:18.817 06:33:31 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
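The autotest prologue above swaps the kernel core_pattern from systemd-coredump to a pipe into scripts/core-collector.sh, so any crash during the run lands in the output coredumps directory. A minimal sketch of that mechanism (collector and output paths illustrative; requires root):

  # save the old handler, then pipe cores to our own script
  old_core_pattern=$(</proc/sys/kernel/core_pattern)
  mkdir -p /path/to/output/coredumps                       # illustrative output dir
  # %P: pid, %s: signal, %t: time -- passed as argv to the collector
  echo '|/path/to/core-collector.sh %P %s %t' > /proc/sys/kernel/core_pattern
  # ... run the tests ...
  echo "$old_core_pattern" > /proc/sys/kernel/core_pattern # restore afterwards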
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:07:33.684 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:07:33.684 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:07:48.575 06:33:59 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:07:48.575 06:33:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:48.575 06:33:59 -- common/autotest_common.sh@10 -- # set +x 00:07:48.575 06:33:59 -- spdk/autotest.sh@78 -- # rm -f 00:07:48.575 06:33:59 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:48.575 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:48.575 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:07:48.575 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:07:48.575 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:07:48.575 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:07:48.575 06:34:00 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:07:48.575 06:34:00 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:07:48.575 06:34:00 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:07:48.575 06:34:00 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:07:48.575 06:34:00 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:07:48.575 06:34:00 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:07:48.575 06:34:00 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:48.575 06:34:00 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:07:48.575 06:34:00 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:48.575 06:34:00 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:07:48.575 06:34:00 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:07:48.575 06:34:00 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:48.576 06:34:00 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:48.576 06:34:00 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:48.576 06:34:00 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:07:48.576 06:34:00 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:48.576 06:34:00 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:07:48.576 06:34:00 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:07:48.576 06:34:00 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:48.576 06:34:00 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:48.576 06:34:00 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:48.576 06:34:00 -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:07:48.576 06:34:00 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:48.576 06:34:00 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:07:48.576 06:34:00 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:07:48.576 06:34:00 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:48.576 06:34:00 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:48.576 06:34:00 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:48.576 06:34:00 -- common/autotest_common.sh@1671 
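The lcov invocation above captures an initial (-i) "Baseline" of zero-count coverage right after the build; a second capture taken after the tests is later merged against it so files the tests never touched still appear in the report. The workflow, sketched with the same flags:

  src=/home/vagrant/spdk_repo/spdk
  LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
  # 1) zero-count baseline straight after the build (-i = initial)
  lcov $LCOV_OPTS -q -c --no-external -i -t Baseline -d "$src" -o cov_base.info
  # 2) real counts once the tests have run
  lcov $LCOV_OPTS -q -c --no-external    -t Tests    -d "$src" -o cov_test.info
  # 3) combine so zero-hit files are still reported
  lcov $LCOV_OPTS -a cov_base.info -a cov_test.info -o cov_total.info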
-- # is_block_zoned nvme2n2 00:07:48.576 06:34:00 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:07:48.576 06:34:00 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:48.576 06:34:00 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:48.576 06:34:00 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:48.576 06:34:00 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:07:48.576 06:34:00 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:07:48.576 06:34:00 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:48.576 06:34:00 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:48.576 06:34:00 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:48.576 06:34:00 -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:07:48.576 06:34:00 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:48.576 06:34:00 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:07:48.576 06:34:00 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:07:48.576 06:34:00 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:48.576 06:34:00 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:48.576 06:34:00 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:07:48.576 06:34:00 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:48.576 06:34:00 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:48.576 06:34:00 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:07:48.576 06:34:00 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:07:48.576 06:34:00 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:07:48.576 No valid GPT data, bailing 00:07:48.576 06:34:00 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:07:48.576 06:34:00 -- scripts/common.sh@394 -- # pt= 00:07:48.576 06:34:00 -- scripts/common.sh@395 -- # return 1 00:07:48.576 06:34:00 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:07:48.576 1+0 records in 00:07:48.576 1+0 records out 00:07:48.576 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108583 s, 96.6 MB/s 00:07:48.576 06:34:00 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:48.576 06:34:00 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:48.576 06:34:00 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:07:48.576 06:34:00 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:07:48.576 06:34:00 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:07:48.576 No valid GPT data, bailing 00:07:48.576 06:34:00 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:07:48.576 06:34:00 -- scripts/common.sh@394 -- # pt= 00:07:48.576 06:34:00 -- scripts/common.sh@395 -- # return 1 00:07:48.576 06:34:00 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:07:48.576 1+0 records in 00:07:48.576 1+0 records out 00:07:48.576 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00306913 s, 342 MB/s 00:07:48.576 06:34:00 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:48.576 06:34:00 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:48.576 06:34:00 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:07:48.576 06:34:00 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:07:48.576 06:34:00 -- scripts/common.sh@390 -- # 
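The get_zoned_devs loop above walks /sys/class/nvme/nvme* and asks each namespace whether it is zoned by reading its queue/zoned sysfs attribute, where "none" means a conventional device. The same test, sketched directly over /sys/block:

  # collect namespaces whose queue reports a zoned model other than "none"
  declare -a zoned_devs=()
  for zfile in /sys/block/nvme*/queue/zoned; do
      [[ -e "$zfile" ]] || continue
      if [[ "$(<"$zfile")" != none ]]; then
          # /sys/block/<ns>/queue/zoned -> <ns>
          zoned_devs+=("$(basename "$(dirname "$(dirname "$zfile")")")")
      fi
  done
  printf 'zoned: %s\n' "${zoned_devs[@]:-none found}"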
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:07:48.576 No valid GPT data, bailing 00:07:48.576 06:34:00 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:07:48.576 06:34:00 -- scripts/common.sh@394 -- # pt= 00:07:48.576 06:34:00 -- scripts/common.sh@395 -- # return 1 00:07:48.576 06:34:00 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:07:48.576 1+0 records in 00:07:48.576 1+0 records out 00:07:48.576 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00337931 s, 310 MB/s 00:07:48.576 06:34:00 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:48.576 06:34:00 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:48.576 06:34:00 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:07:48.576 06:34:00 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:07:48.576 06:34:00 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:07:48.576 No valid GPT data, bailing 00:07:48.576 06:34:00 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:07:48.576 06:34:00 -- scripts/common.sh@394 -- # pt= 00:07:48.576 06:34:00 -- scripts/common.sh@395 -- # return 1 00:07:48.576 06:34:00 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:07:48.576 1+0 records in 00:07:48.576 1+0 records out 00:07:48.576 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00457521 s, 229 MB/s 00:07:48.576 06:34:00 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:48.576 06:34:00 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:48.576 06:34:00 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:07:48.576 06:34:00 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:07:48.576 06:34:00 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:07:48.576 No valid GPT data, bailing 00:07:48.576 06:34:00 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:07:48.576 06:34:00 -- scripts/common.sh@394 -- # pt= 00:07:48.576 06:34:00 -- scripts/common.sh@395 -- # return 1 00:07:48.576 06:34:00 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:07:48.576 1+0 records in 00:07:48.576 1+0 records out 00:07:48.576 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00283044 s, 370 MB/s 00:07:48.576 06:34:00 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:48.576 06:34:00 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:48.576 06:34:00 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:07:48.576 06:34:00 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:07:48.576 06:34:00 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:07:48.576 No valid GPT data, bailing 00:07:48.576 06:34:00 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:07:48.576 06:34:00 -- scripts/common.sh@394 -- # pt= 00:07:48.576 06:34:00 -- scripts/common.sh@395 -- # return 1 00:07:48.576 06:34:00 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:07:48.576 1+0 records in 00:07:48.576 1+0 records out 00:07:48.576 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00285703 s, 367 MB/s 00:07:48.576 06:34:00 -- spdk/autotest.sh@105 -- # sync 00:07:48.576 06:34:00 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:07:48.576 06:34:00 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:07:48.576 06:34:00 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:07:49.508 
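Each "No valid GPT data, bailing" / blkid / dd sequence above is block_in_use deciding a namespace carries no partition table (spdk-gpt.py finds no SPDK GPT entry, blkid finds no PTTYPE) and then zeroing its first MiB so stale metadata cannot confuse later tests. The blkid/dd half of that guard reduces to this sketch:

  dev=/dev/nvme0n1    # illustrative target
  # only wipe when blkid finds no partition-table type on the device
  pt=$(blkid -s PTTYPE -o value "$dev" 2>/dev/null)
  if [[ -z "$pt" ]]; then
      # 1 MiB of zeros clears the primary GPT header and most superblocks
      dd if=/dev/zero of="$dev" bs=1M count=1 conv=fsync
  else
      echo "$dev appears in use (PTTYPE=$pt); skipping"
  fi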
06:34:02 -- spdk/autotest.sh@111 -- # uname -s 00:07:49.508 06:34:02 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:07:49.508 06:34:02 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:07:49.508 06:34:02 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:07:50.074 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:50.331 Hugepages 00:07:50.331 node hugesize free / total 00:07:50.331 node0 1048576kB 0 / 0 00:07:50.331 node0 2048kB 0 / 0 00:07:50.331 00:07:50.331 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:50.331 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:07:50.590 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:07:50.590 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:07:50.590 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:07:50.590 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:07:50.590 06:34:03 -- spdk/autotest.sh@117 -- # uname -s 00:07:50.590 06:34:03 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:07:50.590 06:34:03 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:07:50.590 06:34:03 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:51.157 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:51.415 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:51.415 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:51.415 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:51.672 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:51.672 06:34:04 -- common/autotest_common.sh@1517 -- # sleep 1 00:07:52.650 06:34:05 -- common/autotest_common.sh@1518 -- # bdfs=() 00:07:52.650 06:34:05 -- common/autotest_common.sh@1518 -- # local bdfs 00:07:52.650 06:34:05 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:07:52.650 06:34:05 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:07:52.650 06:34:05 -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:52.650 06:34:05 -- common/autotest_common.sh@1498 -- # local bdfs 00:07:52.650 06:34:05 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:52.650 06:34:05 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:52.650 06:34:05 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:52.650 06:34:05 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:52.650 06:34:05 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:52.650 06:34:05 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:52.908 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:53.165 Waiting for block devices as requested 00:07:53.165 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:53.165 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:53.165 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:53.165 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:58.419 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:58.419 06:34:10 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:58.419 06:34:10 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 
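The get_nvme_bdfs trace above builds the controller list by rendering gen_nvme.sh's JSON attach config and pulling every traddr (a PCI BDF) out with jq. The same extraction as a standalone sketch:

  # gen_nvme.sh emits bdev_nvme attach config; params.traddr holds the PCI BDF
  mapfile -t bdfs < <(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh \
                        | jq -r '.config[].params.traddr')
  (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
  printf '%s\n' "${bdfs[@]}"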
00:07:58.419 06:34:10 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:07:58.419 06:34:10 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:58.419 06:34:10 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:58.420 06:34:10 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:07:58.420 06:34:10 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:58.420 06:34:10 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:07:58.420 06:34:10 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:07:58.420 06:34:10 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:07:58.420 06:34:10 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:58.420 06:34:10 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:07:58.420 06:34:10 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:58.420 06:34:10 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:58.420 06:34:10 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:58.420 06:34:10 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:58.420 06:34:10 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:07:58.420 06:34:10 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:58.420 06:34:10 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:58.420 06:34:10 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:58.420 06:34:10 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:58.420 06:34:10 -- common/autotest_common.sh@1543 -- # continue 00:07:58.420 06:34:10 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:58.420 06:34:10 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:07:58.420 06:34:10 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:07:58.420 06:34:10 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:58.420 06:34:10 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:58.420 06:34:10 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:07:58.420 06:34:10 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:58.420 06:34:10 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:07:58.420 06:34:10 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:07:58.420 06:34:10 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:07:58.420 06:34:10 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:07:58.420 06:34:10 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:58.420 06:34:10 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:58.420 06:34:10 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:58.420 06:34:10 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:58.420 06:34:10 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:58.420 06:34:10 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:07:58.420 06:34:10 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:58.420 06:34:10 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:58.420 06:34:10 -- common/autotest_common.sh@1540 -- # 
unvmcap=' 0' 00:07:58.420 06:34:10 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:58.420 06:34:10 -- common/autotest_common.sh@1543 -- # continue 00:07:58.420 06:34:10 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:58.420 06:34:10 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:07:58.420 06:34:10 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:07:58.420 06:34:10 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:58.420 06:34:10 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:07:58.420 06:34:11 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:07:58.420 06:34:11 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:07:58.420 06:34:11 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:07:58.420 06:34:11 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:07:58.420 06:34:11 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:07:58.420 06:34:11 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:07:58.420 06:34:11 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:58.420 06:34:11 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:58.420 06:34:11 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:58.420 06:34:11 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:58.420 06:34:11 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:58.420 06:34:11 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:07:58.420 06:34:11 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:58.420 06:34:11 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:58.420 06:34:11 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:58.420 06:34:11 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:58.420 06:34:11 -- common/autotest_common.sh@1543 -- # continue 00:07:58.420 06:34:11 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:58.420 06:34:11 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:07:58.420 06:34:11 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:58.420 06:34:11 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:07:58.420 06:34:11 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:07:58.420 06:34:11 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:07:58.420 06:34:11 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:07:58.420 06:34:11 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:07:58.420 06:34:11 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:07:58.420 06:34:11 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:07:58.420 06:34:11 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:07:58.420 06:34:11 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:58.420 06:34:11 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:58.420 06:34:11 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:58.420 06:34:11 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:58.420 06:34:11 -- 
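The repeated id-ctrl parsing above checks two things per controller before the namespace-revert path continues: OACS bit 3 (0x8, namespace management) must be set, and unvmcap must already be 0. The log does this with grep and cut; a condensed awk sketch of the same probe:

  ctrlr=/dev/nvme0    # illustrative controller node
  # OACS is reported as e.g. 'oacs : 0x12a'; bit 0x8 = namespace management
  oacs=$(nvme id-ctrl "$ctrlr" | awk -F: '/^oacs/ {print $2}')
  if (( (oacs & 0x8) != 0 )); then
      unvmcap=$(nvme id-ctrl "$ctrlr" | awk -F: '/^unvmcap/ {print $2}')
      # 0 unallocated capacity means every byte is already in a namespace
      (( unvmcap == 0 )) && echo "$ctrlr: NS management ok, nothing to reclaim"
  fi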
common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:58.420 06:34:11 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:58.420 06:34:11 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:07:58.420 06:34:11 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:58.420 06:34:11 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:58.420 06:34:11 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:58.420 06:34:11 -- common/autotest_common.sh@1543 -- # continue 00:07:58.420 06:34:11 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:07:58.420 06:34:11 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:58.420 06:34:11 -- common/autotest_common.sh@10 -- # set +x 00:07:58.420 06:34:11 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:07:58.420 06:34:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:58.420 06:34:11 -- common/autotest_common.sh@10 -- # set +x 00:07:58.420 06:34:11 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:59.009 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:59.267 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:59.267 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:59.267 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:59.267 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:59.526 06:34:12 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:07:59.526 06:34:12 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:59.526 06:34:12 -- common/autotest_common.sh@10 -- # set +x 00:07:59.526 06:34:12 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:07:59.526 06:34:12 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:07:59.526 06:34:12 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:07:59.526 06:34:12 -- common/autotest_common.sh@1563 -- # bdfs=() 00:07:59.526 06:34:12 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:07:59.526 06:34:12 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:07:59.526 06:34:12 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:07:59.526 06:34:12 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:07:59.526 06:34:12 -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:59.526 06:34:12 -- common/autotest_common.sh@1498 -- # local bdfs 00:07:59.526 06:34:12 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:59.526 06:34:12 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:59.526 06:34:12 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:59.526 06:34:12 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:59.526 06:34:12 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:59.526 06:34:12 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:59.526 06:34:12 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:07:59.526 06:34:12 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:59.526 06:34:12 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:59.526 06:34:12 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:59.526 06:34:12 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:07:59.526 06:34:12 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:59.526 
06:34:12 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:59.526 06:34:12 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:59.526 06:34:12 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:07:59.526 06:34:12 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:59.526 06:34:12 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:59.526 06:34:12 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:59.526 06:34:12 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:07:59.526 06:34:12 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:59.526 06:34:12 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:59.526 06:34:12 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:07:59.526 06:34:12 -- common/autotest_common.sh@1572 -- # return 0 00:07:59.526 06:34:12 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:07:59.526 06:34:12 -- common/autotest_common.sh@1580 -- # return 0 00:07:59.526 06:34:12 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:07:59.526 06:34:12 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:07:59.526 06:34:12 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:59.526 06:34:12 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:59.526 06:34:12 -- spdk/autotest.sh@149 -- # timing_enter lib 00:07:59.527 06:34:12 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:59.527 06:34:12 -- common/autotest_common.sh@10 -- # set +x 00:07:59.527 06:34:12 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:07:59.527 06:34:12 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:59.527 06:34:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:59.527 06:34:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.527 06:34:12 -- common/autotest_common.sh@10 -- # set +x 00:07:59.527 ************************************ 00:07:59.527 START TEST env 00:07:59.527 ************************************ 00:07:59.527 06:34:12 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:59.527 * Looking for test storage... 
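The get_nvme_bdfs_by_id loop above filters controllers by PCI device id, comparing each /sys/bus/pci/devices/<bdf>/device value against 0x0a54; every QEMU-emulated device here reports 0x0010, so the list stays empty and opal revert is skipped. As a standalone filter (reusing a bdfs array like the one gathered earlier):

  want=0x0a54    # device id the opal revert path looks for
  declare -a matches=()
  for bdf in "${bdfs[@]}"; do
      dev_id=$(cat "/sys/bus/pci/devices/$bdf/device")
      [[ "$dev_id" == "$want" ]] && matches+=("$bdf")
  done
  (( ${#matches[@]} )) || echo "no 0x0a54 controllers; skipping opal revert"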
00:07:59.527 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:07:59.527 06:34:12 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:59.527 06:34:12 env -- common/autotest_common.sh@1711 -- # lcov --version 00:07:59.527 06:34:12 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:59.785 06:34:12 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:59.785 06:34:12 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:59.785 06:34:12 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:59.785 06:34:12 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:59.785 06:34:12 env -- scripts/common.sh@336 -- # IFS=.-: 00:07:59.785 06:34:12 env -- scripts/common.sh@336 -- # read -ra ver1 00:07:59.785 06:34:12 env -- scripts/common.sh@337 -- # IFS=.-: 00:07:59.785 06:34:12 env -- scripts/common.sh@337 -- # read -ra ver2 00:07:59.785 06:34:12 env -- scripts/common.sh@338 -- # local 'op=<' 00:07:59.785 06:34:12 env -- scripts/common.sh@340 -- # ver1_l=2 00:07:59.785 06:34:12 env -- scripts/common.sh@341 -- # ver2_l=1 00:07:59.785 06:34:12 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:59.785 06:34:12 env -- scripts/common.sh@344 -- # case "$op" in 00:07:59.785 06:34:12 env -- scripts/common.sh@345 -- # : 1 00:07:59.785 06:34:12 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:59.785 06:34:12 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:59.785 06:34:12 env -- scripts/common.sh@365 -- # decimal 1 00:07:59.785 06:34:12 env -- scripts/common.sh@353 -- # local d=1 00:07:59.785 06:34:12 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:59.785 06:34:12 env -- scripts/common.sh@355 -- # echo 1 00:07:59.785 06:34:12 env -- scripts/common.sh@365 -- # ver1[v]=1 00:07:59.785 06:34:12 env -- scripts/common.sh@366 -- # decimal 2 00:07:59.785 06:34:12 env -- scripts/common.sh@353 -- # local d=2 00:07:59.785 06:34:12 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:59.785 06:34:12 env -- scripts/common.sh@355 -- # echo 2 00:07:59.785 06:34:12 env -- scripts/common.sh@366 -- # ver2[v]=2 00:07:59.785 06:34:12 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:59.785 06:34:12 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:59.785 06:34:12 env -- scripts/common.sh@368 -- # return 0 00:07:59.785 06:34:12 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:59.785 06:34:12 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:59.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.785 --rc genhtml_branch_coverage=1 00:07:59.785 --rc genhtml_function_coverage=1 00:07:59.785 --rc genhtml_legend=1 00:07:59.785 --rc geninfo_all_blocks=1 00:07:59.785 --rc geninfo_unexecuted_blocks=1 00:07:59.785 00:07:59.785 ' 00:07:59.785 06:34:12 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:59.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.785 --rc genhtml_branch_coverage=1 00:07:59.785 --rc genhtml_function_coverage=1 00:07:59.785 --rc genhtml_legend=1 00:07:59.785 --rc geninfo_all_blocks=1 00:07:59.785 --rc geninfo_unexecuted_blocks=1 00:07:59.785 00:07:59.785 ' 00:07:59.785 06:34:12 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:59.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.785 --rc genhtml_branch_coverage=1 00:07:59.785 --rc genhtml_function_coverage=1 00:07:59.785 --rc 
genhtml_legend=1 00:07:59.785 --rc geninfo_all_blocks=1 00:07:59.785 --rc geninfo_unexecuted_blocks=1 00:07:59.785 00:07:59.785 ' 00:07:59.785 06:34:12 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:59.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.785 --rc genhtml_branch_coverage=1 00:07:59.785 --rc genhtml_function_coverage=1 00:07:59.785 --rc genhtml_legend=1 00:07:59.785 --rc geninfo_all_blocks=1 00:07:59.785 --rc geninfo_unexecuted_blocks=1 00:07:59.785 00:07:59.785 ' 00:07:59.785 06:34:12 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:59.785 06:34:12 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:59.785 06:34:12 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.785 06:34:12 env -- common/autotest_common.sh@10 -- # set +x 00:07:59.785 ************************************ 00:07:59.785 START TEST env_memory 00:07:59.785 ************************************ 00:07:59.785 06:34:12 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:59.785 00:07:59.785 00:07:59.785 CUnit - A unit testing framework for C - Version 2.1-3 00:07:59.785 http://cunit.sourceforge.net/ 00:07:59.785 00:07:59.785 00:07:59.785 Suite: memory 00:07:59.785 Test: alloc and free memory map ...[2024-12-06 06:34:12.359591] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:59.785 passed 00:07:59.785 Test: mem map translation ...[2024-12-06 06:34:12.398352] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:59.785 [2024-12-06 06:34:12.398412] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:59.785 [2024-12-06 06:34:12.398480] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:59.785 [2024-12-06 06:34:12.398496] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:59.785 passed 00:07:59.785 Test: mem map registration ...[2024-12-06 06:34:12.466500] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:07:59.785 [2024-12-06 06:34:12.466553] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:07:59.785 passed 00:08:00.044 Test: mem map adjacent registrations ...passed 00:08:00.044 00:08:00.044 Run Summary: Type Total Ran Passed Failed Inactive 00:08:00.044 suites 1 1 n/a 0 0 00:08:00.044 tests 4 4 4 0 0 00:08:00.044 asserts 152 152 152 0 n/a 00:08:00.044 00:08:00.044 Elapsed time = 0.233 seconds 00:08:00.044 00:08:00.044 real 0m0.266s 00:08:00.044 user 0m0.242s 00:08:00.044 sys 0m0.018s 00:08:00.044 06:34:12 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.044 ************************************ 00:08:00.044 END TEST env_memory 00:08:00.044 06:34:12 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:08:00.044 ************************************ 00:08:00.044 06:34:12 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:00.044 06:34:12 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:00.044 06:34:12 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.044 06:34:12 env -- common/autotest_common.sh@10 -- # set +x 00:08:00.044 ************************************ 00:08:00.044 START TEST env_vtophys 00:08:00.044 ************************************ 00:08:00.044 06:34:12 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:00.044 EAL: lib.eal log level changed from notice to debug 00:08:00.044 EAL: Detected lcore 0 as core 0 on socket 0 00:08:00.044 EAL: Detected lcore 1 as core 0 on socket 0 00:08:00.044 EAL: Detected lcore 2 as core 0 on socket 0 00:08:00.044 EAL: Detected lcore 3 as core 0 on socket 0 00:08:00.044 EAL: Detected lcore 4 as core 0 on socket 0 00:08:00.044 EAL: Detected lcore 5 as core 0 on socket 0 00:08:00.044 EAL: Detected lcore 6 as core 0 on socket 0 00:08:00.044 EAL: Detected lcore 7 as core 0 on socket 0 00:08:00.044 EAL: Detected lcore 8 as core 0 on socket 0 00:08:00.044 EAL: Detected lcore 9 as core 0 on socket 0 00:08:00.044 EAL: Maximum logical cores by configuration: 128 00:08:00.044 EAL: Detected CPU lcores: 10 00:08:00.044 EAL: Detected NUMA nodes: 1 00:08:00.044 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:08:00.044 EAL: Detected shared linkage of DPDK 00:08:00.044 EAL: No shared files mode enabled, IPC will be disabled 00:08:00.044 EAL: Selected IOVA mode 'PA' 00:08:00.044 EAL: Probing VFIO support... 00:08:00.044 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:08:00.044 EAL: VFIO modules not loaded, skipping VFIO support... 00:08:00.044 EAL: Ask a virtual area of 0x2e000 bytes 00:08:00.044 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:00.044 EAL: Setting up physically contiguous memory... 
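The EAL bring-up above (lcore detection, VFIO probing, virtual-area reservation) is driven by the test binary initializing the SPDK environment layer. A minimal sketch of that initialization, assuming the public spdk/env.h API — the app name and the specific option values below are illustrative, not taken from this log:

    #include <stdio.h>
    #include "spdk/env.h"

    int main(void)
    {
        struct spdk_env_opts opts;

        /* Populate defaults, then override what the run needs. */
        spdk_env_opts_init(&opts);
        opts.name = "env_sketch";            /* hypothetical app name */
        opts.core_mask = "0x1";              /* single lcore, as these tests use */
        opts.base_virtaddr = 0x200000000000; /* matches the VA reserved above */

        /* spdk_env_init() performs the DPDK EAL setup logged above:
         * lcore detection, VFIO probe, memseg list reservation. */
        if (spdk_env_init(&opts) < 0) {
            fprintf(stderr, "SPDK env init failed\n");
            return 1;
        }
        return 0;
    }
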
00:08:00.044 EAL: Setting maximum number of open files to 524288 00:08:00.044 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:00.044 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:00.044 EAL: Ask a virtual area of 0x61000 bytes 00:08:00.044 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:00.044 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:00.044 EAL: Ask a virtual area of 0x400000000 bytes 00:08:00.044 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:00.044 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:00.044 EAL: Ask a virtual area of 0x61000 bytes 00:08:00.044 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:00.044 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:00.044 EAL: Ask a virtual area of 0x400000000 bytes 00:08:00.044 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:00.044 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:00.044 EAL: Ask a virtual area of 0x61000 bytes 00:08:00.044 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:00.044 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:00.044 EAL: Ask a virtual area of 0x400000000 bytes 00:08:00.044 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:00.044 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:00.044 EAL: Ask a virtual area of 0x61000 bytes 00:08:00.044 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:00.044 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:00.044 EAL: Ask a virtual area of 0x400000000 bytes 00:08:00.044 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:00.044 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:00.044 EAL: Hugepages will be freed exactly as allocated. 00:08:00.044 EAL: No shared files mode enabled, IPC is disabled 00:08:00.044 EAL: No shared files mode enabled, IPC is disabled 00:08:00.044 EAL: TSC frequency is ~2600000 KHz 00:08:00.044 EAL: Main lcore 0 is ready (tid=7fed2f4d9a40;cpuset=[0]) 00:08:00.044 EAL: Trying to obtain current memory policy. 00:08:00.044 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:00.044 EAL: Restoring previous memory policy: 0 00:08:00.044 EAL: request: mp_malloc_sync 00:08:00.044 EAL: No shared files mode enabled, IPC is disabled 00:08:00.044 EAL: Heap on socket 0 was expanded by 2MB 00:08:00.044 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:08:00.302 EAL: No PCI address specified using 'addr=' in: bus=pci 00:08:00.302 EAL: Mem event callback 'spdk:(nil)' registered 00:08:00.302 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:08:00.302 00:08:00.302 00:08:00.302 CUnit - A unit testing framework for C - Version 2.1-3 00:08:00.302 http://cunit.sourceforge.net/ 00:08:00.302 00:08:00.302 00:08:00.302 Suite: components_suite 00:08:00.560 Test: vtophys_malloc_test ...passed 00:08:00.560 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
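vtophys_malloc_test exercises exactly the translation machinery being set up here: buffers come from the pinned DPDK heap and must resolve to physical (IOVA) addresses. A minimal sketch of that round-trip, assuming the spdk_dma_malloc/spdk_vtophys calls from spdk/env.h (an illustration of the checked behavior, not the test's actual source):

    #include <inttypes.h>
    #include <stdio.h>
    #include "spdk/env.h"

    static int check_translation(void)
    {
        /* One 2 MiB hugepage-backed buffer, 2 MiB aligned. */
        void *buf = spdk_dma_malloc(2 * 1024 * 1024, 2 * 1024 * 1024, NULL);
        if (buf == NULL) {
            return -1;
        }

        uint64_t size = 2 * 1024 * 1024;
        uint64_t paddr = spdk_vtophys(buf, &size);
        if (paddr == SPDK_VTOPHYS_ERROR) {
            spdk_dma_free(buf);
            return -1;
        }

        /* The heap expansions logged below occur when allocations like
         * this outgrow the currently mapped hugepages. */
        printf("vaddr %p -> paddr 0x%" PRIx64 "\n", buf, paddr);
        spdk_dma_free(buf);
        return 0;
    }
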
00:08:00.560 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:00.560 EAL: Restoring previous memory policy: 4 00:08:00.560 EAL: Calling mem event callback 'spdk:(nil)' 00:08:00.560 EAL: request: mp_malloc_sync 00:08:00.560 EAL: No shared files mode enabled, IPC is disabled 00:08:00.560 EAL: Heap on socket 0 was expanded by 4MB 00:08:00.560 EAL: Calling mem event callback 'spdk:(nil)' 00:08:00.560 EAL: request: mp_malloc_sync 00:08:00.560 EAL: No shared files mode enabled, IPC is disabled 00:08:00.560 EAL: Heap on socket 0 was shrunk by 4MB 00:08:00.560 EAL: Trying to obtain current memory policy. 00:08:00.560 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:00.560 EAL: Restoring previous memory policy: 4 00:08:00.560 EAL: Calling mem event callback 'spdk:(nil)' 00:08:00.560 EAL: request: mp_malloc_sync 00:08:00.560 EAL: No shared files mode enabled, IPC is disabled 00:08:00.560 EAL: Heap on socket 0 was expanded by 6MB 00:08:00.560 EAL: Calling mem event callback 'spdk:(nil)' 00:08:00.560 EAL: request: mp_malloc_sync 00:08:00.560 EAL: No shared files mode enabled, IPC is disabled 00:08:00.560 EAL: Heap on socket 0 was shrunk by 6MB 00:08:00.560 EAL: Trying to obtain current memory policy. 00:08:00.560 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:00.560 EAL: Restoring previous memory policy: 4 00:08:00.560 EAL: Calling mem event callback 'spdk:(nil)' 00:08:00.560 EAL: request: mp_malloc_sync 00:08:00.560 EAL: No shared files mode enabled, IPC is disabled 00:08:00.560 EAL: Heap on socket 0 was expanded by 10MB 00:08:00.560 EAL: Calling mem event callback 'spdk:(nil)' 00:08:00.560 EAL: request: mp_malloc_sync 00:08:00.560 EAL: No shared files mode enabled, IPC is disabled 00:08:00.560 EAL: Heap on socket 0 was shrunk by 10MB 00:08:00.560 EAL: Trying to obtain current memory policy. 00:08:00.560 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:00.560 EAL: Restoring previous memory policy: 4 00:08:00.560 EAL: Calling mem event callback 'spdk:(nil)' 00:08:00.560 EAL: request: mp_malloc_sync 00:08:00.560 EAL: No shared files mode enabled, IPC is disabled 00:08:00.560 EAL: Heap on socket 0 was expanded by 18MB 00:08:00.560 EAL: Calling mem event callback 'spdk:(nil)' 00:08:00.560 EAL: request: mp_malloc_sync 00:08:00.560 EAL: No shared files mode enabled, IPC is disabled 00:08:00.560 EAL: Heap on socket 0 was shrunk by 18MB 00:08:00.560 EAL: Trying to obtain current memory policy. 00:08:00.560 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:00.560 EAL: Restoring previous memory policy: 4 00:08:00.560 EAL: Calling mem event callback 'spdk:(nil)' 00:08:00.560 EAL: request: mp_malloc_sync 00:08:00.560 EAL: No shared files mode enabled, IPC is disabled 00:08:00.560 EAL: Heap on socket 0 was expanded by 34MB 00:08:00.560 EAL: Calling mem event callback 'spdk:(nil)' 00:08:00.560 EAL: request: mp_malloc_sync 00:08:00.560 EAL: No shared files mode enabled, IPC is disabled 00:08:00.560 EAL: Heap on socket 0 was shrunk by 34MB 00:08:00.560 EAL: Trying to obtain current memory policy. 
00:08:00.560 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:00.560 EAL: Restoring previous memory policy: 4 00:08:00.560 EAL: Calling mem event callback 'spdk:(nil)' 00:08:00.560 EAL: request: mp_malloc_sync 00:08:00.560 EAL: No shared files mode enabled, IPC is disabled 00:08:00.560 EAL: Heap on socket 0 was expanded by 66MB 00:08:00.819 EAL: Calling mem event callback 'spdk:(nil)' 00:08:00.819 EAL: request: mp_malloc_sync 00:08:00.819 EAL: No shared files mode enabled, IPC is disabled 00:08:00.819 EAL: Heap on socket 0 was shrunk by 66MB 00:08:00.819 EAL: Trying to obtain current memory policy. 00:08:00.819 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:00.819 EAL: Restoring previous memory policy: 4 00:08:00.819 EAL: Calling mem event callback 'spdk:(nil)' 00:08:00.819 EAL: request: mp_malloc_sync 00:08:00.819 EAL: No shared files mode enabled, IPC is disabled 00:08:00.819 EAL: Heap on socket 0 was expanded by 130MB 00:08:01.084 EAL: Calling mem event callback 'spdk:(nil)' 00:08:01.084 EAL: request: mp_malloc_sync 00:08:01.084 EAL: No shared files mode enabled, IPC is disabled 00:08:01.084 EAL: Heap on socket 0 was shrunk by 130MB 00:08:01.084 EAL: Trying to obtain current memory policy. 00:08:01.084 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:01.084 EAL: Restoring previous memory policy: 4 00:08:01.084 EAL: Calling mem event callback 'spdk:(nil)' 00:08:01.084 EAL: request: mp_malloc_sync 00:08:01.084 EAL: No shared files mode enabled, IPC is disabled 00:08:01.084 EAL: Heap on socket 0 was expanded by 258MB 00:08:01.341 EAL: Calling mem event callback 'spdk:(nil)' 00:08:01.341 EAL: request: mp_malloc_sync 00:08:01.341 EAL: No shared files mode enabled, IPC is disabled 00:08:01.341 EAL: Heap on socket 0 was shrunk by 258MB 00:08:01.599 EAL: Trying to obtain current memory policy. 00:08:01.599 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:01.858 EAL: Restoring previous memory policy: 4 00:08:01.858 EAL: Calling mem event callback 'spdk:(nil)' 00:08:01.858 EAL: request: mp_malloc_sync 00:08:01.858 EAL: No shared files mode enabled, IPC is disabled 00:08:01.858 EAL: Heap on socket 0 was expanded by 514MB 00:08:02.424 EAL: Calling mem event callback 'spdk:(nil)' 00:08:02.424 EAL: request: mp_malloc_sync 00:08:02.424 EAL: No shared files mode enabled, IPC is disabled 00:08:02.424 EAL: Heap on socket 0 was shrunk by 514MB 00:08:02.989 EAL: Trying to obtain current memory policy. 
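The expansion sizes above follow a regular pattern: 4, 6, 10, 18, 34, 66, 130, 258 and 514 MB are each 2^k + 2 MB for k = 1..9 (and the 1026 MB step that follows continues it with k = 10). This suggests vtophys_spdk_malloc_test doubles its request size each round on top of the constant 2 MB expansion already resident from startup.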
00:08:02.989 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:02.989 EAL: Restoring previous memory policy: 4 00:08:02.989 EAL: Calling mem event callback 'spdk:(nil)' 00:08:02.989 EAL: request: mp_malloc_sync 00:08:02.989 EAL: No shared files mode enabled, IPC is disabled 00:08:02.989 EAL: Heap on socket 0 was expanded by 1026MB 00:08:04.361 EAL: Calling mem event callback 'spdk:(nil)' 00:08:04.361 EAL: request: mp_malloc_sync 00:08:04.361 EAL: No shared files mode enabled, IPC is disabled 00:08:04.361 EAL: Heap on socket 0 was shrunk by 1026MB 00:08:05.297 passed 00:08:05.297 00:08:05.297 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.297 suites 1 1 n/a 0 0 00:08:05.297 tests 2 2 2 0 0 00:08:05.297 asserts 5803 5803 5803 0 n/a 00:08:05.297 00:08:05.297 Elapsed time = 5.003 seconds 00:08:05.297 EAL: Calling mem event callback 'spdk:(nil)' 00:08:05.297 EAL: request: mp_malloc_sync 00:08:05.297 EAL: No shared files mode enabled, IPC is disabled 00:08:05.297 EAL: Heap on socket 0 was shrunk by 2MB 00:08:05.297 EAL: No shared files mode enabled, IPC is disabled 00:08:05.297 EAL: No shared files mode enabled, IPC is disabled 00:08:05.297 EAL: No shared files mode enabled, IPC is disabled 00:08:05.297 00:08:05.297 real 0m5.265s 00:08:05.297 user 0m4.473s 00:08:05.297 sys 0m0.641s 00:08:05.297 06:34:17 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.297 06:34:17 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:08:05.297 ************************************ 00:08:05.297 END TEST env_vtophys 00:08:05.297 ************************************ 00:08:05.297 06:34:17 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:05.297 06:34:17 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:05.297 06:34:17 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.297 06:34:17 env -- common/autotest_common.sh@10 -- # set +x 00:08:05.297 ************************************ 00:08:05.297 START TEST env_pci 00:08:05.297 ************************************ 00:08:05.297 06:34:17 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:05.297 00:08:05.297 00:08:05.297 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.297 http://cunit.sourceforge.net/ 00:08:05.297 00:08:05.297 00:08:05.297 Suite: pci 00:08:05.297 Test: pci_hook ...[2024-12-06 06:34:17.947109] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57104 has claimed it 00:08:05.297 passed 00:08:05.297 00:08:05.297 EAL: Cannot find device (10000:00:01.0) 00:08:05.297 EAL: Failed to attach device on primary process 00:08:05.297 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.297 suites 1 1 n/a 0 0 00:08:05.297 tests 1 1 1 0 0 00:08:05.297 asserts 25 25 25 0 n/a 00:08:05.297 00:08:05.297 Elapsed time = 0.007 seconds 00:08:05.297 00:08:05.297 real 0m0.063s 00:08:05.297 user 0m0.036s 00:08:05.297 sys 0m0.027s 00:08:05.297 06:34:17 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.297 ************************************ 00:08:05.297 END TEST env_pci 00:08:05.297 ************************************ 00:08:05.297 06:34:17 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:08:05.297 06:34:18 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:08:05.297 06:34:18 env -- env/env.sh@15 -- # uname 00:08:05.297 06:34:18 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:08:05.297 06:34:18 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:08:05.297 06:34:18 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:05.297 06:34:18 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:05.297 06:34:18 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.297 06:34:18 env -- common/autotest_common.sh@10 -- # set +x 00:08:05.556 ************************************ 00:08:05.556 START TEST env_dpdk_post_init 00:08:05.556 ************************************ 00:08:05.556 06:34:18 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:05.556 EAL: Detected CPU lcores: 10 00:08:05.556 EAL: Detected NUMA nodes: 1 00:08:05.556 EAL: Detected shared linkage of DPDK 00:08:05.556 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:05.556 EAL: Selected IOVA mode 'PA' 00:08:05.556 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:05.556 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:08:05.556 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:08:05.556 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:08:05.556 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:08:05.556 Starting DPDK initialization... 00:08:05.556 Starting SPDK post initialization... 00:08:05.556 SPDK NVMe probe 00:08:05.556 Attaching to 0000:00:10.0 00:08:05.557 Attaching to 0000:00:11.0 00:08:05.557 Attaching to 0000:00:12.0 00:08:05.557 Attaching to 0000:00:13.0 00:08:05.557 Attached to 0000:00:10.0 00:08:05.557 Attached to 0000:00:11.0 00:08:05.557 Attached to 0000:00:13.0 00:08:05.557 Attached to 0000:00:12.0 00:08:05.557 Cleaning up... 
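The Attaching/Attached lines come from spdk_nvme probe callbacks firing once per emulated controller found on the PCI bus. A minimal sketch of that enumeration, assuming the spdk_nvme_probe API from spdk/nvme.h (the callback and function names here are hypothetical):

    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    static bool probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                         struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attaching to %s\n", trid->traddr);
        return true; /* claim every controller the bus scan finds */
    }

    static void attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                          struct spdk_nvme_ctrlr *ctrlr,
                          const struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attached to %s\n", trid->traddr);
    }

    /* After spdk_env_init(), scan the local PCI bus for NVMe controllers. */
    static int enumerate(void)
    {
        return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
    }
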
00:08:05.557 00:08:05.557 real 0m0.240s 00:08:05.557 user 0m0.070s 00:08:05.557 sys 0m0.072s 00:08:05.557 06:34:18 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.557 06:34:18 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:08:05.557 ************************************ 00:08:05.557 END TEST env_dpdk_post_init 00:08:05.557 ************************************ 00:08:05.814 06:34:18 env -- env/env.sh@26 -- # uname 00:08:05.814 06:34:18 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:08:05.814 06:34:18 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:05.814 06:34:18 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:05.814 06:34:18 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.814 06:34:18 env -- common/autotest_common.sh@10 -- # set +x 00:08:05.814 ************************************ 00:08:05.814 START TEST env_mem_callbacks 00:08:05.814 ************************************ 00:08:05.814 06:34:18 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:05.814 EAL: Detected CPU lcores: 10 00:08:05.814 EAL: Detected NUMA nodes: 1 00:08:05.814 EAL: Detected shared linkage of DPDK 00:08:05.814 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:05.814 EAL: Selected IOVA mode 'PA' 00:08:05.814 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:05.814 00:08:05.814 00:08:05.814 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.814 http://cunit.sourceforge.net/ 00:08:05.814 00:08:05.814 00:08:05.814 Suite: memory 00:08:05.814 Test: test ... 00:08:05.814 register 0x200000200000 2097152 00:08:05.814 malloc 3145728 00:08:05.814 register 0x200000400000 4194304 00:08:05.814 buf 0x2000004fffc0 len 3145728 PASSED 00:08:05.814 malloc 64 00:08:05.814 buf 0x2000004ffec0 len 64 PASSED 00:08:05.814 malloc 4194304 00:08:05.814 register 0x200000800000 6291456 00:08:05.814 buf 0x2000009fffc0 len 4194304 PASSED 00:08:05.815 free 0x2000004fffc0 3145728 00:08:05.815 free 0x2000004ffec0 64 00:08:05.815 unregister 0x200000400000 4194304 PASSED 00:08:05.815 free 0x2000009fffc0 4194304 00:08:05.815 unregister 0x200000800000 6291456 PASSED 00:08:05.815 malloc 8388608 00:08:05.815 register 0x200000400000 10485760 00:08:05.815 buf 0x2000005fffc0 len 8388608 PASSED 00:08:05.815 free 0x2000005fffc0 8388608 00:08:05.815 unregister 0x200000400000 10485760 PASSED 00:08:05.815 passed 00:08:05.815 00:08:05.815 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.815 suites 1 1 n/a 0 0 00:08:05.815 tests 1 1 1 0 0 00:08:05.815 asserts 15 15 15 0 n/a 00:08:05.815 00:08:05.815 Elapsed time = 0.047 seconds 00:08:05.815 00:08:05.815 real 0m0.214s 00:08:05.815 user 0m0.064s 00:08:05.815 sys 0m0.049s 00:08:05.815 06:34:18 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.815 06:34:18 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:08:05.815 ************************************ 00:08:05.815 END TEST env_mem_callbacks 00:08:05.815 ************************************ 00:08:06.072 00:08:06.072 real 0m6.396s 00:08:06.072 user 0m5.055s 00:08:06.072 sys 0m0.990s 00:08:06.072 06:34:18 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.072 06:34:18 env -- common/autotest_common.sh@10 -- # set +x 00:08:06.072 ************************************ 00:08:06.072 END TEST env 00:08:06.072 
************************************ 00:08:06.072 06:34:18 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:06.072 06:34:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:06.072 06:34:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.072 06:34:18 -- common/autotest_common.sh@10 -- # set +x 00:08:06.072 ************************************ 00:08:06.072 START TEST rpc 00:08:06.072 ************************************ 00:08:06.072 06:34:18 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:06.072 * Looking for test storage... 00:08:06.072 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:08:06.072 06:34:18 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:06.072 06:34:18 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:06.072 06:34:18 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:08:06.072 06:34:18 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:06.072 06:34:18 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:06.072 06:34:18 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:06.072 06:34:18 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:06.072 06:34:18 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:06.072 06:34:18 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:06.072 06:34:18 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:06.072 06:34:18 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:06.072 06:34:18 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:06.072 06:34:18 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:06.072 06:34:18 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:06.072 06:34:18 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:06.072 06:34:18 rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:06.072 06:34:18 rpc -- scripts/common.sh@345 -- # : 1 00:08:06.072 06:34:18 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:06.072 06:34:18 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:06.072 06:34:18 rpc -- scripts/common.sh@365 -- # decimal 1 00:08:06.072 06:34:18 rpc -- scripts/common.sh@353 -- # local d=1 00:08:06.072 06:34:18 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:06.073 06:34:18 rpc -- scripts/common.sh@355 -- # echo 1 00:08:06.073 06:34:18 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:06.073 06:34:18 rpc -- scripts/common.sh@366 -- # decimal 2 00:08:06.073 06:34:18 rpc -- scripts/common.sh@353 -- # local d=2 00:08:06.073 06:34:18 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:06.073 06:34:18 rpc -- scripts/common.sh@355 -- # echo 2 00:08:06.073 06:34:18 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:06.073 06:34:18 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:06.073 06:34:18 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:06.073 06:34:18 rpc -- scripts/common.sh@368 -- # return 0 00:08:06.073 06:34:18 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:06.073 06:34:18 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:06.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.073 --rc genhtml_branch_coverage=1 00:08:06.073 --rc genhtml_function_coverage=1 00:08:06.073 --rc genhtml_legend=1 00:08:06.073 --rc geninfo_all_blocks=1 00:08:06.073 --rc geninfo_unexecuted_blocks=1 00:08:06.073 00:08:06.073 ' 00:08:06.073 06:34:18 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:06.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.073 --rc genhtml_branch_coverage=1 00:08:06.073 --rc genhtml_function_coverage=1 00:08:06.073 --rc genhtml_legend=1 00:08:06.073 --rc geninfo_all_blocks=1 00:08:06.073 --rc geninfo_unexecuted_blocks=1 00:08:06.073 00:08:06.073 ' 00:08:06.073 06:34:18 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:06.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.073 --rc genhtml_branch_coverage=1 00:08:06.073 --rc genhtml_function_coverage=1 00:08:06.073 --rc genhtml_legend=1 00:08:06.073 --rc geninfo_all_blocks=1 00:08:06.073 --rc geninfo_unexecuted_blocks=1 00:08:06.073 00:08:06.073 ' 00:08:06.073 06:34:18 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:06.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.073 --rc genhtml_branch_coverage=1 00:08:06.073 --rc genhtml_function_coverage=1 00:08:06.073 --rc genhtml_legend=1 00:08:06.073 --rc geninfo_all_blocks=1 00:08:06.073 --rc geninfo_unexecuted_blocks=1 00:08:06.073 00:08:06.073 ' 00:08:06.073 06:34:18 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57231 00:08:06.073 06:34:18 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:06.073 06:34:18 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57231 00:08:06.073 06:34:18 rpc -- common/autotest_common.sh@835 -- # '[' -z 57231 ']' 00:08:06.073 06:34:18 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.073 06:34:18 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:06.073 06:34:18 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
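waitforlisten blocks until spdk_tgt accepts connections on its JSON-RPC Unix socket. A minimal sketch of that readiness check using only POSIX sockets — a stand-in for the shell helper's behavior, not its actual implementation:

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    /* Return 0 once something accepts connections on the socket path,
     * e.g. "/var/tmp/spdk.sock" as in the log above. */
    static int wait_for_listen(const char *path, int max_retries)
    {
        struct sockaddr_un addr;
        memset(&addr, 0, sizeof(addr));
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        for (int i = 0; i < max_retries; i++) {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd < 0) {
                return -1;
            }
            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
                close(fd);
                return 0; /* target is up and listening */
            }
            close(fd);
            usleep(100 * 1000); /* retry every 100 ms */
        }
        return -1;
    }
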
00:08:06.073 06:34:18 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:06.073 06:34:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:06.073 06:34:18 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:08:06.073 [2024-12-06 06:34:18.804716] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:08:06.073 [2024-12-06 06:34:18.804841] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57231 ] 00:08:06.331 [2024-12-06 06:34:18.968412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.589 [2024-12-06 06:34:19.122117] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:08:06.589 [2024-12-06 06:34:19.122179] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57231' to capture a snapshot of events at runtime. 00:08:06.589 [2024-12-06 06:34:19.122190] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:06.589 [2024-12-06 06:34:19.122199] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:06.589 [2024-12-06 06:34:19.122206] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57231 for offline analysis/debug. 00:08:06.589 [2024-12-06 06:34:19.123056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.156 06:34:19 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:07.156 06:34:19 rpc -- common/autotest_common.sh@868 -- # return 0 00:08:07.156 06:34:19 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:07.156 06:34:19 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:07.156 06:34:19 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:08:07.156 06:34:19 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:08:07.156 06:34:19 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:07.156 06:34:19 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.156 06:34:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.156 ************************************ 00:08:07.156 START TEST rpc_integrity 00:08:07.156 ************************************ 00:08:07.156 06:34:19 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:08:07.156 06:34:19 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:07.156 06:34:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.156 06:34:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:07.156 06:34:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.156 06:34:19 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:07.156 06:34:19 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:07.156 06:34:19 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:07.156 06:34:19 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 
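bdev_malloc_create 8 512 requests an 8 MiB malloc bdev with a 512-byte block size; the bdev_get_bdevs JSON that follows confirms the arithmetic: 16384 blocks x 512 bytes = 8 MiB.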
00:08:07.156 06:34:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.156 06:34:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:07.156 06:34:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.156 06:34:19 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:08:07.156 06:34:19 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:07.156 06:34:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.156 06:34:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:07.156 06:34:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.156 06:34:19 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:07.156 { 00:08:07.156 "name": "Malloc0", 00:08:07.156 "aliases": [ 00:08:07.156 "93e965dd-ef49-4af0-b9fb-5d3155be24dc" 00:08:07.156 ], 00:08:07.156 "product_name": "Malloc disk", 00:08:07.156 "block_size": 512, 00:08:07.156 "num_blocks": 16384, 00:08:07.156 "uuid": "93e965dd-ef49-4af0-b9fb-5d3155be24dc", 00:08:07.156 "assigned_rate_limits": { 00:08:07.156 "rw_ios_per_sec": 0, 00:08:07.156 "rw_mbytes_per_sec": 0, 00:08:07.156 "r_mbytes_per_sec": 0, 00:08:07.156 "w_mbytes_per_sec": 0 00:08:07.156 }, 00:08:07.156 "claimed": false, 00:08:07.156 "zoned": false, 00:08:07.156 "supported_io_types": { 00:08:07.156 "read": true, 00:08:07.156 "write": true, 00:08:07.156 "unmap": true, 00:08:07.156 "flush": true, 00:08:07.156 "reset": true, 00:08:07.156 "nvme_admin": false, 00:08:07.156 "nvme_io": false, 00:08:07.156 "nvme_io_md": false, 00:08:07.156 "write_zeroes": true, 00:08:07.156 "zcopy": true, 00:08:07.156 "get_zone_info": false, 00:08:07.156 "zone_management": false, 00:08:07.156 "zone_append": false, 00:08:07.156 "compare": false, 00:08:07.156 "compare_and_write": false, 00:08:07.156 "abort": true, 00:08:07.156 "seek_hole": false, 00:08:07.156 "seek_data": false, 00:08:07.156 "copy": true, 00:08:07.156 "nvme_iov_md": false 00:08:07.156 }, 00:08:07.156 "memory_domains": [ 00:08:07.156 { 00:08:07.156 "dma_device_id": "system", 00:08:07.156 "dma_device_type": 1 00:08:07.156 }, 00:08:07.156 { 00:08:07.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.156 "dma_device_type": 2 00:08:07.156 } 00:08:07.156 ], 00:08:07.156 "driver_specific": {} 00:08:07.156 } 00:08:07.156 ]' 00:08:07.156 06:34:19 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:07.156 06:34:19 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:07.156 06:34:19 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:08:07.156 06:34:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.156 06:34:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:07.156 [2024-12-06 06:34:19.834164] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:08:07.156 [2024-12-06 06:34:19.834223] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:07.156 [2024-12-06 06:34:19.834247] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:07.156 [2024-12-06 06:34:19.834258] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:07.156 [2024-12-06 06:34:19.836418] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:07.156 [2024-12-06 06:34:19.836471] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:07.156 
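bdev_passthru_create -b Malloc0 -p Passthru0 layers a passthru vbdev on top of the malloc bdev. In the bdev_get_bdevs output that follows, Malloc0 is now reported with "claimed": true and "claim_type": "exclusive_write", while Passthru0 appears as a second bdev whose driver_specific section names Malloc0 as its base bdev.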
Passthru0 00:08:07.156 06:34:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.156 06:34:19 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:07.156 06:34:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.156 06:34:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:07.156 06:34:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.156 06:34:19 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:07.156 { 00:08:07.156 "name": "Malloc0", 00:08:07.156 "aliases": [ 00:08:07.156 "93e965dd-ef49-4af0-b9fb-5d3155be24dc" 00:08:07.156 ], 00:08:07.156 "product_name": "Malloc disk", 00:08:07.156 "block_size": 512, 00:08:07.156 "num_blocks": 16384, 00:08:07.156 "uuid": "93e965dd-ef49-4af0-b9fb-5d3155be24dc", 00:08:07.156 "assigned_rate_limits": { 00:08:07.156 "rw_ios_per_sec": 0, 00:08:07.156 "rw_mbytes_per_sec": 0, 00:08:07.156 "r_mbytes_per_sec": 0, 00:08:07.156 "w_mbytes_per_sec": 0 00:08:07.156 }, 00:08:07.156 "claimed": true, 00:08:07.156 "claim_type": "exclusive_write", 00:08:07.156 "zoned": false, 00:08:07.156 "supported_io_types": { 00:08:07.156 "read": true, 00:08:07.156 "write": true, 00:08:07.156 "unmap": true, 00:08:07.156 "flush": true, 00:08:07.156 "reset": true, 00:08:07.156 "nvme_admin": false, 00:08:07.156 "nvme_io": false, 00:08:07.156 "nvme_io_md": false, 00:08:07.156 "write_zeroes": true, 00:08:07.156 "zcopy": true, 00:08:07.156 "get_zone_info": false, 00:08:07.156 "zone_management": false, 00:08:07.156 "zone_append": false, 00:08:07.156 "compare": false, 00:08:07.156 "compare_and_write": false, 00:08:07.156 "abort": true, 00:08:07.156 "seek_hole": false, 00:08:07.156 "seek_data": false, 00:08:07.156 "copy": true, 00:08:07.156 "nvme_iov_md": false 00:08:07.156 }, 00:08:07.156 "memory_domains": [ 00:08:07.156 { 00:08:07.156 "dma_device_id": "system", 00:08:07.156 "dma_device_type": 1 00:08:07.156 }, 00:08:07.156 { 00:08:07.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.156 "dma_device_type": 2 00:08:07.156 } 00:08:07.156 ], 00:08:07.156 "driver_specific": {} 00:08:07.156 }, 00:08:07.156 { 00:08:07.156 "name": "Passthru0", 00:08:07.156 "aliases": [ 00:08:07.156 "79a95106-08b1-555e-8d45-fb7ab8735500" 00:08:07.156 ], 00:08:07.156 "product_name": "passthru", 00:08:07.156 "block_size": 512, 00:08:07.156 "num_blocks": 16384, 00:08:07.156 "uuid": "79a95106-08b1-555e-8d45-fb7ab8735500", 00:08:07.156 "assigned_rate_limits": { 00:08:07.156 "rw_ios_per_sec": 0, 00:08:07.156 "rw_mbytes_per_sec": 0, 00:08:07.156 "r_mbytes_per_sec": 0, 00:08:07.156 "w_mbytes_per_sec": 0 00:08:07.156 }, 00:08:07.156 "claimed": false, 00:08:07.156 "zoned": false, 00:08:07.156 "supported_io_types": { 00:08:07.156 "read": true, 00:08:07.156 "write": true, 00:08:07.156 "unmap": true, 00:08:07.156 "flush": true, 00:08:07.156 "reset": true, 00:08:07.156 "nvme_admin": false, 00:08:07.156 "nvme_io": false, 00:08:07.156 "nvme_io_md": false, 00:08:07.156 "write_zeroes": true, 00:08:07.156 "zcopy": true, 00:08:07.156 "get_zone_info": false, 00:08:07.156 "zone_management": false, 00:08:07.156 "zone_append": false, 00:08:07.156 "compare": false, 00:08:07.156 "compare_and_write": false, 00:08:07.156 "abort": true, 00:08:07.156 "seek_hole": false, 00:08:07.156 "seek_data": false, 00:08:07.156 "copy": true, 00:08:07.156 "nvme_iov_md": false 00:08:07.156 }, 00:08:07.156 "memory_domains": [ 00:08:07.156 { 00:08:07.156 "dma_device_id": "system", 00:08:07.156 "dma_device_type": 1 00:08:07.156 }, 
00:08:07.156 { 00:08:07.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.156 "dma_device_type": 2 00:08:07.156 } 00:08:07.156 ], 00:08:07.156 "driver_specific": { 00:08:07.156 "passthru": { 00:08:07.156 "name": "Passthru0", 00:08:07.156 "base_bdev_name": "Malloc0" 00:08:07.156 } 00:08:07.156 } 00:08:07.156 } 00:08:07.156 ]' 00:08:07.156 06:34:19 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:07.156 06:34:19 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:07.156 06:34:19 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:07.156 06:34:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.156 06:34:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:07.156 06:34:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.156 06:34:19 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:08:07.156 06:34:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.156 06:34:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:07.413 06:34:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.413 06:34:19 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:07.413 06:34:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.413 06:34:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:07.413 06:34:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.413 06:34:19 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:07.413 06:34:19 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:07.413 06:34:19 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:07.413 00:08:07.413 real 0m0.237s 00:08:07.413 user 0m0.123s 00:08:07.413 sys 0m0.031s 00:08:07.413 ************************************ 00:08:07.413 END TEST rpc_integrity 00:08:07.413 ************************************ 00:08:07.413 06:34:19 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.413 06:34:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:07.413 06:34:19 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:08:07.413 06:34:19 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:07.413 06:34:19 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.413 06:34:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.413 ************************************ 00:08:07.413 START TEST rpc_plugins 00:08:07.413 ************************************ 00:08:07.413 06:34:19 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:08:07.413 06:34:19 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:08:07.413 06:34:19 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.413 06:34:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:07.413 06:34:20 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.413 06:34:20 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:08:07.413 06:34:20 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:08:07.413 06:34:20 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.413 06:34:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:07.413 06:34:20 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.413 06:34:20 
rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:08:07.413 { 00:08:07.413 "name": "Malloc1", 00:08:07.413 "aliases": [ 00:08:07.413 "1e20c148-00ac-426e-b7af-d39794afacf7" 00:08:07.414 ], 00:08:07.414 "product_name": "Malloc disk", 00:08:07.414 "block_size": 4096, 00:08:07.414 "num_blocks": 256, 00:08:07.414 "uuid": "1e20c148-00ac-426e-b7af-d39794afacf7", 00:08:07.414 "assigned_rate_limits": { 00:08:07.414 "rw_ios_per_sec": 0, 00:08:07.414 "rw_mbytes_per_sec": 0, 00:08:07.414 "r_mbytes_per_sec": 0, 00:08:07.414 "w_mbytes_per_sec": 0 00:08:07.414 }, 00:08:07.414 "claimed": false, 00:08:07.414 "zoned": false, 00:08:07.414 "supported_io_types": { 00:08:07.414 "read": true, 00:08:07.414 "write": true, 00:08:07.414 "unmap": true, 00:08:07.414 "flush": true, 00:08:07.414 "reset": true, 00:08:07.414 "nvme_admin": false, 00:08:07.414 "nvme_io": false, 00:08:07.414 "nvme_io_md": false, 00:08:07.414 "write_zeroes": true, 00:08:07.414 "zcopy": true, 00:08:07.414 "get_zone_info": false, 00:08:07.414 "zone_management": false, 00:08:07.414 "zone_append": false, 00:08:07.414 "compare": false, 00:08:07.414 "compare_and_write": false, 00:08:07.414 "abort": true, 00:08:07.414 "seek_hole": false, 00:08:07.414 "seek_data": false, 00:08:07.414 "copy": true, 00:08:07.414 "nvme_iov_md": false 00:08:07.414 }, 00:08:07.414 "memory_domains": [ 00:08:07.414 { 00:08:07.414 "dma_device_id": "system", 00:08:07.414 "dma_device_type": 1 00:08:07.414 }, 00:08:07.414 { 00:08:07.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.414 "dma_device_type": 2 00:08:07.414 } 00:08:07.414 ], 00:08:07.414 "driver_specific": {} 00:08:07.414 } 00:08:07.414 ]' 00:08:07.414 06:34:20 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:08:07.414 06:34:20 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:08:07.414 06:34:20 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:08:07.414 06:34:20 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.414 06:34:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:07.414 06:34:20 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.414 06:34:20 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:08:07.414 06:34:20 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.414 06:34:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:07.414 06:34:20 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.414 06:34:20 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:08:07.414 06:34:20 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:08:07.414 06:34:20 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:08:07.414 00:08:07.414 real 0m0.111s 00:08:07.414 user 0m0.060s 00:08:07.414 sys 0m0.017s 00:08:07.414 06:34:20 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.414 ************************************ 00:08:07.414 END TEST rpc_plugins 00:08:07.414 ************************************ 00:08:07.414 06:34:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:07.414 06:34:20 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:08:07.414 06:34:20 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:07.414 06:34:20 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.414 06:34:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.414 ************************************ 00:08:07.414 START TEST rpc_trace_cmd_test 
00:08:07.414 ************************************ 00:08:07.414 06:34:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:08:07.414 06:34:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:08:07.414 06:34:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:08:07.414 06:34:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.414 06:34:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.414 06:34:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.414 06:34:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:08:07.414 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57231", 00:08:07.414 "tpoint_group_mask": "0x8", 00:08:07.414 "iscsi_conn": { 00:08:07.414 "mask": "0x2", 00:08:07.414 "tpoint_mask": "0x0" 00:08:07.414 }, 00:08:07.414 "scsi": { 00:08:07.414 "mask": "0x4", 00:08:07.414 "tpoint_mask": "0x0" 00:08:07.414 }, 00:08:07.414 "bdev": { 00:08:07.414 "mask": "0x8", 00:08:07.414 "tpoint_mask": "0xffffffffffffffff" 00:08:07.414 }, 00:08:07.414 "nvmf_rdma": { 00:08:07.414 "mask": "0x10", 00:08:07.414 "tpoint_mask": "0x0" 00:08:07.414 }, 00:08:07.414 "nvmf_tcp": { 00:08:07.414 "mask": "0x20", 00:08:07.414 "tpoint_mask": "0x0" 00:08:07.414 }, 00:08:07.414 "ftl": { 00:08:07.414 "mask": "0x40", 00:08:07.414 "tpoint_mask": "0x0" 00:08:07.414 }, 00:08:07.414 "blobfs": { 00:08:07.414 "mask": "0x80", 00:08:07.414 "tpoint_mask": "0x0" 00:08:07.414 }, 00:08:07.414 "dsa": { 00:08:07.414 "mask": "0x200", 00:08:07.414 "tpoint_mask": "0x0" 00:08:07.414 }, 00:08:07.414 "thread": { 00:08:07.414 "mask": "0x400", 00:08:07.414 "tpoint_mask": "0x0" 00:08:07.414 }, 00:08:07.414 "nvme_pcie": { 00:08:07.414 "mask": "0x800", 00:08:07.414 "tpoint_mask": "0x0" 00:08:07.414 }, 00:08:07.414 "iaa": { 00:08:07.414 "mask": "0x1000", 00:08:07.414 "tpoint_mask": "0x0" 00:08:07.414 }, 00:08:07.414 "nvme_tcp": { 00:08:07.414 "mask": "0x2000", 00:08:07.414 "tpoint_mask": "0x0" 00:08:07.414 }, 00:08:07.414 "bdev_nvme": { 00:08:07.414 "mask": "0x4000", 00:08:07.414 "tpoint_mask": "0x0" 00:08:07.414 }, 00:08:07.414 "sock": { 00:08:07.414 "mask": "0x8000", 00:08:07.414 "tpoint_mask": "0x0" 00:08:07.414 }, 00:08:07.414 "blob": { 00:08:07.414 "mask": "0x10000", 00:08:07.414 "tpoint_mask": "0x0" 00:08:07.414 }, 00:08:07.414 "bdev_raid": { 00:08:07.414 "mask": "0x20000", 00:08:07.414 "tpoint_mask": "0x0" 00:08:07.414 }, 00:08:07.414 "scheduler": { 00:08:07.414 "mask": "0x40000", 00:08:07.414 "tpoint_mask": "0x0" 00:08:07.414 } 00:08:07.414 }' 00:08:07.672 06:34:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:08:07.672 06:34:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:08:07.672 06:34:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:07.672 06:34:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:07.672 06:34:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:08:07.672 06:34:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:07.672 06:34:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:07.672 06:34:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:07.672 06:34:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:07.672 06:34:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:08:07.672 00:08:07.672 real 0m0.166s 00:08:07.672 
user 0m0.136s 00:08:07.672 sys 0m0.020s 00:08:07.672 ************************************ 00:08:07.672 06:34:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.672 06:34:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.672 END TEST rpc_trace_cmd_test 00:08:07.672 ************************************ 00:08:07.672 06:34:20 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:08:07.672 06:34:20 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:07.672 06:34:20 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:08:07.672 06:34:20 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:07.672 06:34:20 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.672 06:34:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.672 ************************************ 00:08:07.672 START TEST rpc_daemon_integrity 00:08:07.672 ************************************ 00:08:07.672 06:34:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:08:07.672 06:34:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:07.672 06:34:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.672 06:34:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:07.672 06:34:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.672 06:34:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:07.672 06:34:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:07.672 06:34:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:07.672 06:34:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:07.672 06:34:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.672 06:34:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:07.672 06:34:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.672 06:34:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:08:07.672 06:34:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:07.672 06:34:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.672 06:34:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:07.929 06:34:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.929 06:34:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:07.929 { 00:08:07.929 "name": "Malloc2", 00:08:07.929 "aliases": [ 00:08:07.929 "55fc4d64-c261-49dd-b5c0-ecb0182ef1c4" 00:08:07.929 ], 00:08:07.929 "product_name": "Malloc disk", 00:08:07.929 "block_size": 512, 00:08:07.929 "num_blocks": 16384, 00:08:07.929 "uuid": "55fc4d64-c261-49dd-b5c0-ecb0182ef1c4", 00:08:07.929 "assigned_rate_limits": { 00:08:07.929 "rw_ios_per_sec": 0, 00:08:07.929 "rw_mbytes_per_sec": 0, 00:08:07.929 "r_mbytes_per_sec": 0, 00:08:07.929 "w_mbytes_per_sec": 0 00:08:07.929 }, 00:08:07.929 "claimed": false, 00:08:07.929 "zoned": false, 00:08:07.929 "supported_io_types": { 00:08:07.929 "read": true, 00:08:07.929 "write": true, 00:08:07.929 "unmap": true, 00:08:07.929 "flush": true, 00:08:07.929 "reset": true, 00:08:07.929 "nvme_admin": false, 00:08:07.929 "nvme_io": false, 00:08:07.929 "nvme_io_md": false, 00:08:07.929 "write_zeroes": true, 00:08:07.929 "zcopy": true, 00:08:07.929 "get_zone_info": 
false, 00:08:07.929 "zone_management": false, 00:08:07.929 "zone_append": false, 00:08:07.929 "compare": false, 00:08:07.929 "compare_and_write": false, 00:08:07.929 "abort": true, 00:08:07.929 "seek_hole": false, 00:08:07.929 "seek_data": false, 00:08:07.929 "copy": true, 00:08:07.929 "nvme_iov_md": false 00:08:07.929 }, 00:08:07.929 "memory_domains": [ 00:08:07.929 { 00:08:07.929 "dma_device_id": "system", 00:08:07.929 "dma_device_type": 1 00:08:07.929 }, 00:08:07.929 { 00:08:07.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.929 "dma_device_type": 2 00:08:07.929 } 00:08:07.929 ], 00:08:07.929 "driver_specific": {} 00:08:07.929 } 00:08:07.929 ]' 00:08:07.929 06:34:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:07.929 06:34:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:07.929 06:34:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:08:07.929 06:34:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.929 06:34:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:07.930 [2024-12-06 06:34:20.449508] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:08:07.930 [2024-12-06 06:34:20.449569] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:07.930 [2024-12-06 06:34:20.449590] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:07.930 [2024-12-06 06:34:20.449601] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:07.930 [2024-12-06 06:34:20.451789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:07.930 [2024-12-06 06:34:20.451830] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:07.930 Passthru0 00:08:07.930 06:34:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.930 06:34:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:07.930 06:34:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.930 06:34:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:07.930 06:34:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.930 06:34:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:07.930 { 00:08:07.930 "name": "Malloc2", 00:08:07.930 "aliases": [ 00:08:07.930 "55fc4d64-c261-49dd-b5c0-ecb0182ef1c4" 00:08:07.930 ], 00:08:07.930 "product_name": "Malloc disk", 00:08:07.930 "block_size": 512, 00:08:07.930 "num_blocks": 16384, 00:08:07.930 "uuid": "55fc4d64-c261-49dd-b5c0-ecb0182ef1c4", 00:08:07.930 "assigned_rate_limits": { 00:08:07.930 "rw_ios_per_sec": 0, 00:08:07.930 "rw_mbytes_per_sec": 0, 00:08:07.930 "r_mbytes_per_sec": 0, 00:08:07.930 "w_mbytes_per_sec": 0 00:08:07.930 }, 00:08:07.930 "claimed": true, 00:08:07.930 "claim_type": "exclusive_write", 00:08:07.930 "zoned": false, 00:08:07.930 "supported_io_types": { 00:08:07.930 "read": true, 00:08:07.930 "write": true, 00:08:07.930 "unmap": true, 00:08:07.930 "flush": true, 00:08:07.930 "reset": true, 00:08:07.930 "nvme_admin": false, 00:08:07.930 "nvme_io": false, 00:08:07.930 "nvme_io_md": false, 00:08:07.930 "write_zeroes": true, 00:08:07.930 "zcopy": true, 00:08:07.930 "get_zone_info": false, 00:08:07.930 "zone_management": false, 00:08:07.930 "zone_append": false, 00:08:07.930 "compare": false, 
00:08:07.930 "compare_and_write": false, 00:08:07.930 "abort": true, 00:08:07.930 "seek_hole": false, 00:08:07.930 "seek_data": false, 00:08:07.930 "copy": true, 00:08:07.930 "nvme_iov_md": false 00:08:07.930 }, 00:08:07.930 "memory_domains": [ 00:08:07.930 { 00:08:07.930 "dma_device_id": "system", 00:08:07.930 "dma_device_type": 1 00:08:07.930 }, 00:08:07.930 { 00:08:07.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.930 "dma_device_type": 2 00:08:07.930 } 00:08:07.930 ], 00:08:07.930 "driver_specific": {} 00:08:07.930 }, 00:08:07.930 { 00:08:07.930 "name": "Passthru0", 00:08:07.930 "aliases": [ 00:08:07.930 "1dd00e95-fc60-55b8-a04a-9c1e5d5f9ffd" 00:08:07.930 ], 00:08:07.930 "product_name": "passthru", 00:08:07.930 "block_size": 512, 00:08:07.930 "num_blocks": 16384, 00:08:07.930 "uuid": "1dd00e95-fc60-55b8-a04a-9c1e5d5f9ffd", 00:08:07.930 "assigned_rate_limits": { 00:08:07.930 "rw_ios_per_sec": 0, 00:08:07.930 "rw_mbytes_per_sec": 0, 00:08:07.930 "r_mbytes_per_sec": 0, 00:08:07.930 "w_mbytes_per_sec": 0 00:08:07.930 }, 00:08:07.930 "claimed": false, 00:08:07.930 "zoned": false, 00:08:07.930 "supported_io_types": { 00:08:07.930 "read": true, 00:08:07.930 "write": true, 00:08:07.930 "unmap": true, 00:08:07.930 "flush": true, 00:08:07.930 "reset": true, 00:08:07.930 "nvme_admin": false, 00:08:07.930 "nvme_io": false, 00:08:07.930 "nvme_io_md": false, 00:08:07.930 "write_zeroes": true, 00:08:07.930 "zcopy": true, 00:08:07.930 "get_zone_info": false, 00:08:07.930 "zone_management": false, 00:08:07.930 "zone_append": false, 00:08:07.930 "compare": false, 00:08:07.930 "compare_and_write": false, 00:08:07.930 "abort": true, 00:08:07.930 "seek_hole": false, 00:08:07.930 "seek_data": false, 00:08:07.930 "copy": true, 00:08:07.930 "nvme_iov_md": false 00:08:07.930 }, 00:08:07.930 "memory_domains": [ 00:08:07.930 { 00:08:07.930 "dma_device_id": "system", 00:08:07.930 "dma_device_type": 1 00:08:07.930 }, 00:08:07.930 { 00:08:07.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.930 "dma_device_type": 2 00:08:07.930 } 00:08:07.930 ], 00:08:07.930 "driver_specific": { 00:08:07.930 "passthru": { 00:08:07.930 "name": "Passthru0", 00:08:07.930 "base_bdev_name": "Malloc2" 00:08:07.930 } 00:08:07.930 } 00:08:07.930 } 00:08:07.930 ]' 00:08:07.930 06:34:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:07.930 06:34:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:07.930 06:34:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:07.930 06:34:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.930 06:34:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:07.930 06:34:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.930 06:34:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:08:07.930 06:34:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.930 06:34:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:07.930 06:34:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.930 06:34:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:07.930 06:34:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.930 06:34:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:07.930 06:34:20 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.930 06:34:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:07.930 06:34:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:07.930 06:34:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:07.930 00:08:07.930 real 0m0.242s 00:08:07.930 user 0m0.127s 00:08:07.930 sys 0m0.032s 00:08:07.930 06:34:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.930 ************************************ 00:08:07.930 END TEST rpc_daemon_integrity 00:08:07.930 ************************************ 00:08:07.930 06:34:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:07.930 06:34:20 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:07.930 06:34:20 rpc -- rpc/rpc.sh@84 -- # killprocess 57231 00:08:07.930 06:34:20 rpc -- common/autotest_common.sh@954 -- # '[' -z 57231 ']' 00:08:07.930 06:34:20 rpc -- common/autotest_common.sh@958 -- # kill -0 57231 00:08:07.930 06:34:20 rpc -- common/autotest_common.sh@959 -- # uname 00:08:07.930 06:34:20 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:07.930 06:34:20 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57231 00:08:07.930 06:34:20 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:07.930 killing process with pid 57231 00:08:07.930 06:34:20 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:07.930 06:34:20 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57231' 00:08:07.930 06:34:20 rpc -- common/autotest_common.sh@973 -- # kill 57231 00:08:07.930 06:34:20 rpc -- common/autotest_common.sh@978 -- # wait 57231 00:08:09.825 00:08:09.825 real 0m3.550s 00:08:09.825 user 0m3.955s 00:08:09.825 sys 0m0.580s 00:08:09.825 ************************************ 00:08:09.825 END TEST rpc 00:08:09.825 ************************************ 00:08:09.825 06:34:22 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:09.825 06:34:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.825 06:34:22 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:08:09.825 06:34:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:09.825 06:34:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:09.825 06:34:22 -- common/autotest_common.sh@10 -- # set +x 00:08:09.825 ************************************ 00:08:09.825 START TEST skip_rpc 00:08:09.825 ************************************ 00:08:09.825 06:34:22 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:08:09.825 * Looking for test storage... 
00:08:09.825 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:08:09.825 06:34:22 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:09.825 06:34:22 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:08:09.825 06:34:22 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:09.825 06:34:22 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:09.825 06:34:22 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:09.825 06:34:22 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:09.825 06:34:22 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:09.825 06:34:22 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:09.825 06:34:22 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:09.825 06:34:22 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:09.825 06:34:22 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:09.825 06:34:22 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:09.825 06:34:22 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:09.825 06:34:22 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:09.825 06:34:22 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:09.825 06:34:22 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:09.825 06:34:22 skip_rpc -- scripts/common.sh@345 -- # : 1 00:08:09.825 06:34:22 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:09.825 06:34:22 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:09.825 06:34:22 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:09.825 06:34:22 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:08:09.825 06:34:22 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:09.825 06:34:22 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:08:09.825 06:34:22 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:09.825 06:34:22 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:09.825 06:34:22 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:08:09.825 06:34:22 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:09.825 06:34:22 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:08:09.825 06:34:22 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:09.825 06:34:22 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:09.825 06:34:22 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:09.825 06:34:22 skip_rpc -- scripts/common.sh@368 -- # return 0 00:08:09.825 06:34:22 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:09.825 06:34:22 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:09.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.826 --rc genhtml_branch_coverage=1 00:08:09.826 --rc genhtml_function_coverage=1 00:08:09.826 --rc genhtml_legend=1 00:08:09.826 --rc geninfo_all_blocks=1 00:08:09.826 --rc geninfo_unexecuted_blocks=1 00:08:09.826 00:08:09.826 ' 00:08:09.826 06:34:22 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:09.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.826 --rc genhtml_branch_coverage=1 00:08:09.826 --rc genhtml_function_coverage=1 00:08:09.826 --rc genhtml_legend=1 00:08:09.826 --rc geninfo_all_blocks=1 00:08:09.826 --rc geninfo_unexecuted_blocks=1 00:08:09.826 00:08:09.826 ' 00:08:09.826 06:34:22 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:08:09.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.826 --rc genhtml_branch_coverage=1 00:08:09.826 --rc genhtml_function_coverage=1 00:08:09.826 --rc genhtml_legend=1 00:08:09.826 --rc geninfo_all_blocks=1 00:08:09.826 --rc geninfo_unexecuted_blocks=1 00:08:09.826 00:08:09.826 ' 00:08:09.826 06:34:22 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:09.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.826 --rc genhtml_branch_coverage=1 00:08:09.826 --rc genhtml_function_coverage=1 00:08:09.826 --rc genhtml_legend=1 00:08:09.826 --rc geninfo_all_blocks=1 00:08:09.826 --rc geninfo_unexecuted_blocks=1 00:08:09.826 00:08:09.826 ' 00:08:09.826 06:34:22 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:09.826 06:34:22 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:09.826 06:34:22 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:08:09.826 06:34:22 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:09.826 06:34:22 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:09.826 06:34:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.826 ************************************ 00:08:09.826 START TEST skip_rpc 00:08:09.826 ************************************ 00:08:09.826 06:34:22 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:08:09.826 06:34:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57444 00:08:09.826 06:34:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:09.826 06:34:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:08:09.826 06:34:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:08:09.826 [2024-12-06 06:34:22.408041] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
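The skip_rpc case above launches spdk_tgt with --no-rpc-server, sleeps, and then (in the trace that follows) asserts that an RPC call cannot succeed. A minimal sketch of that expect-failure check, assuming a standard SPDK checkout with scripts/rpc.py and the default /var/tmp/spdk.sock socket (paths and the exact check are illustrative, not the test's literal code):

    # Start the target with its RPC server disabled.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    tgt_pid=$!
    sleep 5

    # spdk_get_version must fail: nothing is listening on the RPC socket.
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version; then
        echo 'unexpected: RPC call succeeded with --no-rpc-server' >&2
        kill "$tgt_pid"
        exit 1
    fi

    kill "$tgt_pid"
    wait "$tgt_pid" || true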
00:08:09.826 [2024-12-06 06:34:22.408137] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57444 ] 00:08:09.826 [2024-12-06 06:34:22.562480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.083 [2024-12-06 06:34:22.663770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.370 06:34:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:08:15.370 06:34:27 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:08:15.370 06:34:27 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:08:15.370 06:34:27 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:15.370 06:34:27 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:15.370 06:34:27 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:15.370 06:34:27 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:15.370 06:34:27 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:08:15.370 06:34:27 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.370 06:34:27 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:15.370 06:34:27 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:15.370 06:34:27 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:08:15.370 06:34:27 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:15.370 06:34:27 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:15.370 06:34:27 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:15.370 06:34:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:08:15.370 06:34:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57444 00:08:15.370 06:34:27 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57444 ']' 00:08:15.370 06:34:27 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57444 00:08:15.370 06:34:27 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:08:15.370 06:34:27 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:15.370 06:34:27 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57444 00:08:15.370 killing process with pid 57444 00:08:15.370 06:34:27 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:15.370 06:34:27 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:15.370 06:34:27 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57444' 00:08:15.370 06:34:27 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57444 00:08:15.370 06:34:27 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57444 00:08:15.938 00:08:15.938 real 0m6.234s 00:08:15.938 user 0m5.854s 00:08:15.938 sys 0m0.274s 00:08:15.938 06:34:28 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:15.938 06:34:28 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:15.938 ************************************ 00:08:15.938 END TEST skip_rpc 00:08:15.938 
************************************ 00:08:15.938 06:34:28 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:08:15.938 06:34:28 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:15.938 06:34:28 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:15.938 06:34:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:15.938 ************************************ 00:08:15.938 START TEST skip_rpc_with_json 00:08:15.938 ************************************ 00:08:15.938 06:34:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:08:15.938 06:34:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:08:15.938 06:34:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57537 00:08:15.938 06:34:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:15.938 06:34:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57537 00:08:15.938 06:34:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57537 ']' 00:08:15.938 06:34:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.938 06:34:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:15.938 06:34:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.938 06:34:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:15.938 06:34:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:15.939 06:34:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:16.210 [2024-12-06 06:34:28.682983] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
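The skip_rpc_with_json case that follows is a save/reload round trip: wait for the target's RPC listener, create the nvmf TCP transport, dump the live configuration with save_config, restart spdk_tgt from that JSON file, and grep the new log for 'TCP Transport Init' to prove the transport came back. A hedged sketch of the same flow (the real test uses the waitforlisten and rpc_cmd helpers from autotest_common.sh; the polling loop and paths below are illustrative stand-ins):

    SPDK=/home/vagrant/spdk_repo/spdk
    CONFIG=$SPDK/test/rpc/config.json

    # Poll until the RPC server answers -- roughly what waitforlisten does.
    for i in $(seq 1 100); do
        $SPDK/scripts/rpc.py rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done

    $SPDK/scripts/rpc.py nvmf_create_transport -t tcp
    $SPDK/scripts/rpc.py save_config > "$CONFIG"

    # Relaunch from the saved file; the transport should be re-created on boot.
    $SPDK/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json "$CONFIG" > log.txt 2>&1 &
    sleep 5
    grep -q 'TCP Transport Init' log.txt && echo 'transport restored from JSON'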
00:08:16.210 [2024-12-06 06:34:28.683086] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57537 ] 00:08:16.210 [2024-12-06 06:34:28.834576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.210 [2024-12-06 06:34:28.918871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.143 06:34:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:17.143 06:34:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:08:17.143 06:34:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:08:17.143 06:34:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.143 06:34:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:17.143 [2024-12-06 06:34:29.564271] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:08:17.143 request: 00:08:17.143 { 00:08:17.143 "trtype": "tcp", 00:08:17.143 "method": "nvmf_get_transports", 00:08:17.143 "req_id": 1 00:08:17.143 } 00:08:17.143 Got JSON-RPC error response 00:08:17.143 response: 00:08:17.143 { 00:08:17.143 "code": -19, 00:08:17.143 "message": "No such device" 00:08:17.143 } 00:08:17.143 06:34:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:17.143 06:34:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:08:17.143 06:34:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.143 06:34:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:17.143 [2024-12-06 06:34:29.572352] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:17.143 06:34:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.143 06:34:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:08:17.143 06:34:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.143 06:34:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:17.143 06:34:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.143 06:34:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:17.143 { 00:08:17.143 "subsystems": [ 00:08:17.143 { 00:08:17.143 "subsystem": "fsdev", 00:08:17.143 "config": [ 00:08:17.143 { 00:08:17.143 "method": "fsdev_set_opts", 00:08:17.143 "params": { 00:08:17.143 "fsdev_io_pool_size": 65535, 00:08:17.143 "fsdev_io_cache_size": 256 00:08:17.143 } 00:08:17.143 } 00:08:17.143 ] 00:08:17.143 }, 00:08:17.143 { 00:08:17.143 "subsystem": "keyring", 00:08:17.143 "config": [] 00:08:17.143 }, 00:08:17.143 { 00:08:17.143 "subsystem": "iobuf", 00:08:17.143 "config": [ 00:08:17.143 { 00:08:17.143 "method": "iobuf_set_options", 00:08:17.143 "params": { 00:08:17.143 "small_pool_count": 8192, 00:08:17.143 "large_pool_count": 1024, 00:08:17.143 "small_bufsize": 8192, 00:08:17.143 "large_bufsize": 135168, 00:08:17.143 "enable_numa": false 00:08:17.143 } 00:08:17.143 } 00:08:17.143 ] 00:08:17.143 }, 00:08:17.143 { 00:08:17.143 "subsystem": "sock", 00:08:17.143 "config": [ 00:08:17.143 { 
00:08:17.143 "method": "sock_set_default_impl", 00:08:17.143 "params": { 00:08:17.143 "impl_name": "posix" 00:08:17.143 } 00:08:17.143 }, 00:08:17.143 { 00:08:17.143 "method": "sock_impl_set_options", 00:08:17.143 "params": { 00:08:17.143 "impl_name": "ssl", 00:08:17.143 "recv_buf_size": 4096, 00:08:17.143 "send_buf_size": 4096, 00:08:17.143 "enable_recv_pipe": true, 00:08:17.143 "enable_quickack": false, 00:08:17.143 "enable_placement_id": 0, 00:08:17.143 "enable_zerocopy_send_server": true, 00:08:17.143 "enable_zerocopy_send_client": false, 00:08:17.143 "zerocopy_threshold": 0, 00:08:17.143 "tls_version": 0, 00:08:17.143 "enable_ktls": false 00:08:17.143 } 00:08:17.143 }, 00:08:17.143 { 00:08:17.143 "method": "sock_impl_set_options", 00:08:17.143 "params": { 00:08:17.143 "impl_name": "posix", 00:08:17.143 "recv_buf_size": 2097152, 00:08:17.143 "send_buf_size": 2097152, 00:08:17.143 "enable_recv_pipe": true, 00:08:17.143 "enable_quickack": false, 00:08:17.144 "enable_placement_id": 0, 00:08:17.144 "enable_zerocopy_send_server": true, 00:08:17.144 "enable_zerocopy_send_client": false, 00:08:17.144 "zerocopy_threshold": 0, 00:08:17.144 "tls_version": 0, 00:08:17.144 "enable_ktls": false 00:08:17.144 } 00:08:17.144 } 00:08:17.144 ] 00:08:17.144 }, 00:08:17.144 { 00:08:17.144 "subsystem": "vmd", 00:08:17.144 "config": [] 00:08:17.144 }, 00:08:17.144 { 00:08:17.144 "subsystem": "accel", 00:08:17.144 "config": [ 00:08:17.144 { 00:08:17.144 "method": "accel_set_options", 00:08:17.144 "params": { 00:08:17.144 "small_cache_size": 128, 00:08:17.144 "large_cache_size": 16, 00:08:17.144 "task_count": 2048, 00:08:17.144 "sequence_count": 2048, 00:08:17.144 "buf_count": 2048 00:08:17.144 } 00:08:17.144 } 00:08:17.144 ] 00:08:17.144 }, 00:08:17.144 { 00:08:17.144 "subsystem": "bdev", 00:08:17.144 "config": [ 00:08:17.144 { 00:08:17.144 "method": "bdev_set_options", 00:08:17.144 "params": { 00:08:17.144 "bdev_io_pool_size": 65535, 00:08:17.144 "bdev_io_cache_size": 256, 00:08:17.144 "bdev_auto_examine": true, 00:08:17.144 "iobuf_small_cache_size": 128, 00:08:17.144 "iobuf_large_cache_size": 16 00:08:17.144 } 00:08:17.144 }, 00:08:17.144 { 00:08:17.144 "method": "bdev_raid_set_options", 00:08:17.144 "params": { 00:08:17.144 "process_window_size_kb": 1024, 00:08:17.144 "process_max_bandwidth_mb_sec": 0 00:08:17.144 } 00:08:17.144 }, 00:08:17.144 { 00:08:17.144 "method": "bdev_iscsi_set_options", 00:08:17.144 "params": { 00:08:17.144 "timeout_sec": 30 00:08:17.144 } 00:08:17.144 }, 00:08:17.144 { 00:08:17.144 "method": "bdev_nvme_set_options", 00:08:17.144 "params": { 00:08:17.144 "action_on_timeout": "none", 00:08:17.144 "timeout_us": 0, 00:08:17.144 "timeout_admin_us": 0, 00:08:17.144 "keep_alive_timeout_ms": 10000, 00:08:17.144 "arbitration_burst": 0, 00:08:17.144 "low_priority_weight": 0, 00:08:17.144 "medium_priority_weight": 0, 00:08:17.144 "high_priority_weight": 0, 00:08:17.144 "nvme_adminq_poll_period_us": 10000, 00:08:17.144 "nvme_ioq_poll_period_us": 0, 00:08:17.144 "io_queue_requests": 0, 00:08:17.144 "delay_cmd_submit": true, 00:08:17.144 "transport_retry_count": 4, 00:08:17.144 "bdev_retry_count": 3, 00:08:17.144 "transport_ack_timeout": 0, 00:08:17.144 "ctrlr_loss_timeout_sec": 0, 00:08:17.144 "reconnect_delay_sec": 0, 00:08:17.144 "fast_io_fail_timeout_sec": 0, 00:08:17.144 "disable_auto_failback": false, 00:08:17.144 "generate_uuids": false, 00:08:17.144 "transport_tos": 0, 00:08:17.144 "nvme_error_stat": false, 00:08:17.144 "rdma_srq_size": 0, 00:08:17.144 "io_path_stat": false, 
00:08:17.144 "allow_accel_sequence": false, 00:08:17.144 "rdma_max_cq_size": 0, 00:08:17.144 "rdma_cm_event_timeout_ms": 0, 00:08:17.144 "dhchap_digests": [ 00:08:17.144 "sha256", 00:08:17.144 "sha384", 00:08:17.144 "sha512" 00:08:17.144 ], 00:08:17.144 "dhchap_dhgroups": [ 00:08:17.144 "null", 00:08:17.144 "ffdhe2048", 00:08:17.144 "ffdhe3072", 00:08:17.144 "ffdhe4096", 00:08:17.144 "ffdhe6144", 00:08:17.144 "ffdhe8192" 00:08:17.144 ] 00:08:17.144 } 00:08:17.144 }, 00:08:17.144 { 00:08:17.144 "method": "bdev_nvme_set_hotplug", 00:08:17.144 "params": { 00:08:17.144 "period_us": 100000, 00:08:17.144 "enable": false 00:08:17.144 } 00:08:17.144 }, 00:08:17.144 { 00:08:17.144 "method": "bdev_wait_for_examine" 00:08:17.144 } 00:08:17.144 ] 00:08:17.144 }, 00:08:17.144 { 00:08:17.144 "subsystem": "scsi", 00:08:17.144 "config": null 00:08:17.144 }, 00:08:17.144 { 00:08:17.144 "subsystem": "scheduler", 00:08:17.144 "config": [ 00:08:17.144 { 00:08:17.144 "method": "framework_set_scheduler", 00:08:17.144 "params": { 00:08:17.144 "name": "static" 00:08:17.144 } 00:08:17.144 } 00:08:17.144 ] 00:08:17.144 }, 00:08:17.144 { 00:08:17.144 "subsystem": "vhost_scsi", 00:08:17.144 "config": [] 00:08:17.144 }, 00:08:17.144 { 00:08:17.144 "subsystem": "vhost_blk", 00:08:17.144 "config": [] 00:08:17.144 }, 00:08:17.144 { 00:08:17.144 "subsystem": "ublk", 00:08:17.144 "config": [] 00:08:17.144 }, 00:08:17.144 { 00:08:17.144 "subsystem": "nbd", 00:08:17.144 "config": [] 00:08:17.144 }, 00:08:17.144 { 00:08:17.144 "subsystem": "nvmf", 00:08:17.144 "config": [ 00:08:17.144 { 00:08:17.144 "method": "nvmf_set_config", 00:08:17.144 "params": { 00:08:17.144 "discovery_filter": "match_any", 00:08:17.144 "admin_cmd_passthru": { 00:08:17.144 "identify_ctrlr": false 00:08:17.144 }, 00:08:17.144 "dhchap_digests": [ 00:08:17.144 "sha256", 00:08:17.144 "sha384", 00:08:17.144 "sha512" 00:08:17.144 ], 00:08:17.144 "dhchap_dhgroups": [ 00:08:17.144 "null", 00:08:17.144 "ffdhe2048", 00:08:17.144 "ffdhe3072", 00:08:17.144 "ffdhe4096", 00:08:17.144 "ffdhe6144", 00:08:17.144 "ffdhe8192" 00:08:17.144 ] 00:08:17.144 } 00:08:17.144 }, 00:08:17.144 { 00:08:17.144 "method": "nvmf_set_max_subsystems", 00:08:17.144 "params": { 00:08:17.144 "max_subsystems": 1024 00:08:17.144 } 00:08:17.144 }, 00:08:17.144 { 00:08:17.144 "method": "nvmf_set_crdt", 00:08:17.144 "params": { 00:08:17.144 "crdt1": 0, 00:08:17.144 "crdt2": 0, 00:08:17.144 "crdt3": 0 00:08:17.144 } 00:08:17.144 }, 00:08:17.144 { 00:08:17.144 "method": "nvmf_create_transport", 00:08:17.144 "params": { 00:08:17.144 "trtype": "TCP", 00:08:17.144 "max_queue_depth": 128, 00:08:17.144 "max_io_qpairs_per_ctrlr": 127, 00:08:17.144 "in_capsule_data_size": 4096, 00:08:17.144 "max_io_size": 131072, 00:08:17.144 "io_unit_size": 131072, 00:08:17.144 "max_aq_depth": 128, 00:08:17.144 "num_shared_buffers": 511, 00:08:17.144 "buf_cache_size": 4294967295, 00:08:17.144 "dif_insert_or_strip": false, 00:08:17.144 "zcopy": false, 00:08:17.144 "c2h_success": true, 00:08:17.144 "sock_priority": 0, 00:08:17.144 "abort_timeout_sec": 1, 00:08:17.144 "ack_timeout": 0, 00:08:17.144 "data_wr_pool_size": 0 00:08:17.144 } 00:08:17.144 } 00:08:17.144 ] 00:08:17.144 }, 00:08:17.144 { 00:08:17.144 "subsystem": "iscsi", 00:08:17.145 "config": [ 00:08:17.145 { 00:08:17.145 "method": "iscsi_set_options", 00:08:17.145 "params": { 00:08:17.145 "node_base": "iqn.2016-06.io.spdk", 00:08:17.145 "max_sessions": 128, 00:08:17.145 "max_connections_per_session": 2, 00:08:17.145 "max_queue_depth": 64, 00:08:17.145 
"default_time2wait": 2, 00:08:17.145 "default_time2retain": 20, 00:08:17.145 "first_burst_length": 8192, 00:08:17.145 "immediate_data": true, 00:08:17.145 "allow_duplicated_isid": false, 00:08:17.145 "error_recovery_level": 0, 00:08:17.145 "nop_timeout": 60, 00:08:17.145 "nop_in_interval": 30, 00:08:17.145 "disable_chap": false, 00:08:17.145 "require_chap": false, 00:08:17.145 "mutual_chap": false, 00:08:17.145 "chap_group": 0, 00:08:17.145 "max_large_datain_per_connection": 64, 00:08:17.145 "max_r2t_per_connection": 4, 00:08:17.145 "pdu_pool_size": 36864, 00:08:17.145 "immediate_data_pool_size": 16384, 00:08:17.145 "data_out_pool_size": 2048 00:08:17.145 } 00:08:17.145 } 00:08:17.145 ] 00:08:17.145 } 00:08:17.145 ] 00:08:17.145 } 00:08:17.145 06:34:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:17.145 06:34:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57537 00:08:17.145 06:34:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57537 ']' 00:08:17.145 06:34:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57537 00:08:17.145 06:34:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:08:17.145 06:34:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:17.145 06:34:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57537 00:08:17.145 06:34:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:17.145 killing process with pid 57537 00:08:17.145 06:34:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:17.145 06:34:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57537' 00:08:17.145 06:34:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57537 00:08:17.145 06:34:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57537 00:08:18.528 06:34:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57576 00:08:18.528 06:34:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:08:18.528 06:34:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:23.794 06:34:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57576 00:08:23.794 06:34:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57576 ']' 00:08:23.794 06:34:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57576 00:08:23.794 06:34:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:08:23.794 06:34:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:23.794 06:34:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57576 00:08:23.794 killing process with pid 57576 00:08:23.794 06:34:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:23.794 06:34:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:23.794 06:34:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57576' 00:08:23.794 06:34:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 57576 00:08:23.794 06:34:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57576 00:08:24.728 06:34:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:24.728 06:34:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:24.728 00:08:24.728 real 0m8.576s 00:08:24.728 user 0m8.249s 00:08:24.728 sys 0m0.587s 00:08:24.728 06:34:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:24.728 06:34:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:24.728 ************************************ 00:08:24.728 END TEST skip_rpc_with_json 00:08:24.728 ************************************ 00:08:24.728 06:34:37 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:08:24.728 06:34:37 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:24.728 06:34:37 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:24.728 06:34:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.728 ************************************ 00:08:24.728 START TEST skip_rpc_with_delay 00:08:24.728 ************************************ 00:08:24.728 06:34:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:08:24.728 06:34:37 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:24.728 06:34:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:08:24.728 06:34:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:24.728 06:34:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:24.728 06:34:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:24.728 06:34:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:24.728 06:34:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:24.728 06:34:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:24.728 06:34:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:24.728 06:34:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:24.728 06:34:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:08:24.728 06:34:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:24.728 [2024-12-06 06:34:37.311554] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
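The failure above is the point of the skip_rpc_with_delay case: --wait-for-rpc is rejected when the RPC server is disabled, and the test wraps the launch in autotest_common.sh's NOT helper so that the non-zero exit is what makes the test pass. A simplified stand-in for that helper, for illustration only (the real one also classifies exit codes, e.g. the es=1 handling visible in the trace):

    # Invert a command's exit status: succeed only when it fails.
    NOT() {
        if "$@"; then
            return 1
        fi
        return 0
    }

    # The flag combination is invalid by design, so failure is expected.
    NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc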
00:08:24.728 06:34:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:08:24.728 06:34:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:24.728 06:34:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:24.728 06:34:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:24.728 00:08:24.728 real 0m0.125s 00:08:24.728 user 0m0.061s 00:08:24.728 sys 0m0.063s 00:08:24.728 06:34:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:24.728 06:34:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:08:24.728 ************************************ 00:08:24.728 END TEST skip_rpc_with_delay 00:08:24.728 ************************************ 00:08:24.728 06:34:37 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:08:24.728 06:34:37 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:08:24.728 06:34:37 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:08:24.728 06:34:37 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:24.728 06:34:37 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:24.728 06:34:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.728 ************************************ 00:08:24.728 START TEST exit_on_failed_rpc_init 00:08:24.728 ************************************ 00:08:24.728 06:34:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:08:24.728 06:34:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57693 00:08:24.728 06:34:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57693 00:08:24.728 06:34:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57693 ']' 00:08:24.728 06:34:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.728 06:34:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:24.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:24.728 06:34:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.728 06:34:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:24.728 06:34:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:24.728 06:34:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:24.986 [2024-12-06 06:34:37.471058] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
00:08:24.986 [2024-12-06 06:34:37.471184] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57693 ] 00:08:24.986 [2024-12-06 06:34:37.631723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.243 [2024-12-06 06:34:37.735736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.810 06:34:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:25.810 06:34:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:08:25.810 06:34:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:25.810 06:34:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:25.810 06:34:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:08:25.810 06:34:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:25.810 06:34:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:25.810 06:34:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:25.810 06:34:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:25.810 06:34:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:25.810 06:34:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:25.810 06:34:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:25.810 06:34:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:25.810 06:34:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:08:25.810 06:34:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:25.810 [2024-12-06 06:34:38.427924] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:08:25.810 [2024-12-06 06:34:38.428053] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57711 ] 00:08:26.068 [2024-12-06 06:34:38.582964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.068 [2024-12-06 06:34:38.665363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.068 [2024-12-06 06:34:38.665437] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
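Here the second target (pid 57711, core mask 0x2) fails because both instances default to the same RPC socket, /var/tmp/spdk.sock, which pid 57693 already holds; exit_on_failed_rpc_init relies on exactly this conflict. Outside a negative test, the usual way to run two targets side by side is to give each its own socket via spdk_tgt's -r (--rpc-socket) option, sketched here with illustrative socket paths:

    SPDK=/home/vagrant/spdk_repo/spdk
    $SPDK/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_a.sock &
    $SPDK/build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_b.sock &
    sleep 5

    # Address each instance through its own socket.
    $SPDK/scripts/rpc.py -s /var/tmp/spdk_a.sock spdk_get_version
    $SPDK/scripts/rpc.py -s /var/tmp/spdk_b.sock spdk_get_version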
00:08:26.068 [2024-12-06 06:34:38.665449] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:08:26.068 [2024-12-06 06:34:38.665469] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:26.327 06:34:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:08:26.327 06:34:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:26.327 06:34:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:08:26.327 06:34:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:08:26.327 06:34:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:08:26.327 06:34:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:26.327 06:34:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:26.327 06:34:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57693 00:08:26.327 06:34:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57693 ']' 00:08:26.327 06:34:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57693 00:08:26.327 06:34:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:08:26.327 06:34:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:26.327 06:34:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57693 00:08:26.327 06:34:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:26.327 killing process with pid 57693 00:08:26.327 06:34:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:26.327 06:34:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57693' 00:08:26.327 06:34:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57693 00:08:26.327 06:34:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57693 00:08:27.699 00:08:27.699 real 0m2.954s 00:08:27.699 user 0m3.191s 00:08:27.699 sys 0m0.419s 00:08:27.699 06:34:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.699 ************************************ 00:08:27.699 END TEST exit_on_failed_rpc_init 00:08:27.699 ************************************ 00:08:27.699 06:34:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:27.699 06:34:40 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:27.699 00:08:27.699 real 0m18.200s 00:08:27.699 user 0m17.507s 00:08:27.699 sys 0m1.501s 00:08:27.699 06:34:40 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.699 06:34:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.699 ************************************ 00:08:27.699 END TEST skip_rpc 00:08:27.699 ************************************ 00:08:27.699 06:34:40 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:27.699 06:34:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:27.699 06:34:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.699 06:34:40 -- common/autotest_common.sh@10 -- # set +x 00:08:27.699 
************************************ 00:08:27.699 START TEST rpc_client 00:08:27.699 ************************************ 00:08:27.699 06:34:40 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:27.958 * Looking for test storage... 00:08:27.958 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:08:27.958 06:34:40 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:27.958 06:34:40 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:08:27.958 06:34:40 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:27.958 06:34:40 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:27.958 06:34:40 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:27.958 06:34:40 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:27.958 06:34:40 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:27.958 06:34:40 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:08:27.958 06:34:40 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:08:27.958 06:34:40 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:08:27.958 06:34:40 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:08:27.958 06:34:40 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:08:27.958 06:34:40 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:08:27.958 06:34:40 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:08:27.958 06:34:40 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:27.958 06:34:40 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:08:27.958 06:34:40 rpc_client -- scripts/common.sh@345 -- # : 1 00:08:27.958 06:34:40 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:27.958 06:34:40 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:27.958 06:34:40 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:08:27.958 06:34:40 rpc_client -- scripts/common.sh@353 -- # local d=1 00:08:27.958 06:34:40 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:27.958 06:34:40 rpc_client -- scripts/common.sh@355 -- # echo 1 00:08:27.958 06:34:40 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:08:27.958 06:34:40 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:08:27.958 06:34:40 rpc_client -- scripts/common.sh@353 -- # local d=2 00:08:27.958 06:34:40 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:27.958 06:34:40 rpc_client -- scripts/common.sh@355 -- # echo 2 00:08:27.958 06:34:40 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:08:27.958 06:34:40 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:27.958 06:34:40 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:27.958 06:34:40 rpc_client -- scripts/common.sh@368 -- # return 0 00:08:27.958 06:34:40 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:27.958 06:34:40 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:27.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.958 --rc genhtml_branch_coverage=1 00:08:27.958 --rc genhtml_function_coverage=1 00:08:27.958 --rc genhtml_legend=1 00:08:27.958 --rc geninfo_all_blocks=1 00:08:27.958 --rc geninfo_unexecuted_blocks=1 00:08:27.958 00:08:27.958 ' 00:08:27.958 06:34:40 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:27.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.958 --rc genhtml_branch_coverage=1 00:08:27.958 --rc genhtml_function_coverage=1 00:08:27.958 --rc genhtml_legend=1 00:08:27.958 --rc geninfo_all_blocks=1 00:08:27.958 --rc geninfo_unexecuted_blocks=1 00:08:27.958 00:08:27.958 ' 00:08:27.958 06:34:40 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:27.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.958 --rc genhtml_branch_coverage=1 00:08:27.958 --rc genhtml_function_coverage=1 00:08:27.958 --rc genhtml_legend=1 00:08:27.958 --rc geninfo_all_blocks=1 00:08:27.958 --rc geninfo_unexecuted_blocks=1 00:08:27.958 00:08:27.958 ' 00:08:27.958 06:34:40 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:27.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.958 --rc genhtml_branch_coverage=1 00:08:27.958 --rc genhtml_function_coverage=1 00:08:27.958 --rc genhtml_legend=1 00:08:27.958 --rc geninfo_all_blocks=1 00:08:27.958 --rc geninfo_unexecuted_blocks=1 00:08:27.958 00:08:27.958 ' 00:08:27.958 06:34:40 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:08:27.958 OK 00:08:27.958 06:34:40 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:27.958 00:08:27.958 real 0m0.196s 00:08:27.958 user 0m0.120s 00:08:27.958 sys 0m0.083s 00:08:27.958 06:34:40 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.958 06:34:40 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:08:27.958 ************************************ 00:08:27.958 END TEST rpc_client 00:08:27.958 ************************************ 00:08:27.958 06:34:40 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:27.958 06:34:40 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:27.958 06:34:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.958 06:34:40 -- common/autotest_common.sh@10 -- # set +x 00:08:27.958 ************************************ 00:08:27.958 START TEST json_config 00:08:27.958 ************************************ 00:08:27.958 06:34:40 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:28.217 06:34:40 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:28.217 06:34:40 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:08:28.217 06:34:40 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:28.217 06:34:40 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:28.217 06:34:40 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:28.217 06:34:40 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:28.217 06:34:40 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:28.217 06:34:40 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:08:28.217 06:34:40 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:08:28.217 06:34:40 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:08:28.217 06:34:40 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:08:28.217 06:34:40 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:08:28.217 06:34:40 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:08:28.217 06:34:40 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:08:28.217 06:34:40 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:28.217 06:34:40 json_config -- scripts/common.sh@344 -- # case "$op" in 00:08:28.217 06:34:40 json_config -- scripts/common.sh@345 -- # : 1 00:08:28.217 06:34:40 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:28.217 06:34:40 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:28.217 06:34:40 json_config -- scripts/common.sh@365 -- # decimal 1 00:08:28.217 06:34:40 json_config -- scripts/common.sh@353 -- # local d=1 00:08:28.217 06:34:40 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:28.217 06:34:40 json_config -- scripts/common.sh@355 -- # echo 1 00:08:28.217 06:34:40 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:08:28.217 06:34:40 json_config -- scripts/common.sh@366 -- # decimal 2 00:08:28.217 06:34:40 json_config -- scripts/common.sh@353 -- # local d=2 00:08:28.217 06:34:40 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:28.217 06:34:40 json_config -- scripts/common.sh@355 -- # echo 2 00:08:28.217 06:34:40 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:08:28.217 06:34:40 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:28.217 06:34:40 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:28.217 06:34:40 json_config -- scripts/common.sh@368 -- # return 0 00:08:28.217 06:34:40 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:28.217 06:34:40 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:28.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.217 --rc genhtml_branch_coverage=1 00:08:28.217 --rc genhtml_function_coverage=1 00:08:28.217 --rc genhtml_legend=1 00:08:28.217 --rc geninfo_all_blocks=1 00:08:28.217 --rc geninfo_unexecuted_blocks=1 00:08:28.217 00:08:28.217 ' 00:08:28.217 06:34:40 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:28.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.217 --rc genhtml_branch_coverage=1 00:08:28.217 --rc genhtml_function_coverage=1 00:08:28.217 --rc genhtml_legend=1 00:08:28.217 --rc geninfo_all_blocks=1 00:08:28.217 --rc geninfo_unexecuted_blocks=1 00:08:28.217 00:08:28.217 ' 00:08:28.217 06:34:40 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:28.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.217 --rc genhtml_branch_coverage=1 00:08:28.217 --rc genhtml_function_coverage=1 00:08:28.217 --rc genhtml_legend=1 00:08:28.217 --rc geninfo_all_blocks=1 00:08:28.217 --rc geninfo_unexecuted_blocks=1 00:08:28.217 00:08:28.217 ' 00:08:28.217 06:34:40 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:28.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.217 --rc genhtml_branch_coverage=1 00:08:28.217 --rc genhtml_function_coverage=1 00:08:28.217 --rc genhtml_legend=1 00:08:28.217 --rc geninfo_all_blocks=1 00:08:28.217 --rc geninfo_unexecuted_blocks=1 00:08:28.217 00:08:28.217 ' 00:08:28.217 06:34:40 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:28.217 06:34:40 json_config -- nvmf/common.sh@7 -- # uname -s 00:08:28.217 06:34:40 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:28.217 06:34:40 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:28.217 06:34:40 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:28.217 06:34:40 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:28.217 06:34:40 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:28.217 06:34:40 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:28.217 06:34:40 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:28.217 06:34:40 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:28.217 06:34:40 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:28.217 06:34:40 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:28.217 06:34:40 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b8a95972-adac-4888-bff5-5983b481f9e9 00:08:28.218 06:34:40 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=b8a95972-adac-4888-bff5-5983b481f9e9 00:08:28.218 06:34:40 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:28.218 06:34:40 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:28.218 06:34:40 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:28.218 06:34:40 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:28.218 06:34:40 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:28.218 06:34:40 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:08:28.218 06:34:40 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:28.218 06:34:40 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:28.218 06:34:40 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:28.218 06:34:40 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.218 06:34:40 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.218 06:34:40 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.218 06:34:40 json_config -- paths/export.sh@5 -- # export PATH 00:08:28.218 06:34:40 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.218 06:34:40 json_config -- nvmf/common.sh@51 -- # : 0 00:08:28.218 06:34:40 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:28.218 06:34:40 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:28.218 06:34:40 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:28.218 06:34:40 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:28.218 06:34:40 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:28.218 06:34:40 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:28.218 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:28.218 06:34:40 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:28.218 06:34:40 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:28.218 06:34:40 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:28.218 06:34:40 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:08:28.218 06:34:40 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:08:28.218 06:34:40 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:08:28.218 06:34:40 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:08:28.218 WARNING: No tests are enabled so not running JSON configuration tests 00:08:28.218 06:34:40 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:28.218 06:34:40 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:08:28.218 06:34:40 json_config -- json_config/json_config.sh@28 -- # exit 0 00:08:28.218 00:08:28.218 real 0m0.131s 00:08:28.218 user 0m0.085s 00:08:28.218 sys 0m0.050s 00:08:28.218 06:34:40 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:28.218 ************************************ 00:08:28.218 06:34:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:28.218 END TEST json_config 00:08:28.218 ************************************ 00:08:28.218 06:34:40 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:28.218 06:34:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:28.218 06:34:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:28.218 06:34:40 -- common/autotest_common.sh@10 -- # set +x 00:08:28.218 ************************************ 00:08:28.218 START TEST json_config_extra_key 00:08:28.218 ************************************ 00:08:28.218 06:34:40 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:28.218 06:34:40 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:28.218 06:34:40 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:08:28.218 06:34:40 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:28.218 06:34:40 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:28.218 06:34:40 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:28.218 06:34:40 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:28.218 06:34:40 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:28.218 06:34:40 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:08:28.218 06:34:40 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:08:28.218 06:34:40 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:08:28.218 06:34:40 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:08:28.218 06:34:40 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:08:28.218 06:34:40 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:08:28.218 06:34:40 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:08:28.218 06:34:40 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:28.218 06:34:40 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:08:28.218 06:34:40 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:08:28.218 06:34:40 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:28.218 06:34:40 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:28.218 06:34:40 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:08:28.218 06:34:40 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:08:28.218 06:34:40 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:28.218 06:34:40 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:08:28.218 06:34:40 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:08:28.218 06:34:40 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:08:28.218 06:34:40 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:08:28.218 06:34:40 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:28.218 06:34:40 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:08:28.218 06:34:40 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:08:28.218 06:34:40 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:28.218 06:34:40 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:28.218 06:34:40 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:08:28.218 06:34:40 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:28.218 06:34:40 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:28.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.218 --rc genhtml_branch_coverage=1 00:08:28.218 --rc genhtml_function_coverage=1 00:08:28.218 --rc genhtml_legend=1 00:08:28.218 --rc geninfo_all_blocks=1 00:08:28.218 --rc geninfo_unexecuted_blocks=1 00:08:28.218 00:08:28.218 ' 00:08:28.218 06:34:40 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:28.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.218 --rc genhtml_branch_coverage=1 00:08:28.218 --rc genhtml_function_coverage=1 00:08:28.218 --rc genhtml_legend=1 00:08:28.219 --rc geninfo_all_blocks=1 00:08:28.219 --rc geninfo_unexecuted_blocks=1 00:08:28.219 00:08:28.219 ' 00:08:28.219 06:34:40 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:28.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.219 --rc genhtml_branch_coverage=1 00:08:28.219 --rc genhtml_function_coverage=1 00:08:28.219 --rc genhtml_legend=1 00:08:28.219 --rc geninfo_all_blocks=1 00:08:28.219 --rc geninfo_unexecuted_blocks=1 00:08:28.219 00:08:28.219 ' 00:08:28.219 06:34:40 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:28.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.219 --rc genhtml_branch_coverage=1 00:08:28.219 --rc 
genhtml_function_coverage=1 00:08:28.219 --rc genhtml_legend=1 00:08:28.219 --rc geninfo_all_blocks=1 00:08:28.219 --rc geninfo_unexecuted_blocks=1 00:08:28.219 00:08:28.219 ' 00:08:28.219 06:34:40 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:28.219 06:34:40 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:08:28.219 06:34:40 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:28.219 06:34:40 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:28.219 06:34:40 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:28.219 06:34:40 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:28.219 06:34:40 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:28.219 06:34:40 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:28.219 06:34:40 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:28.219 06:34:40 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:28.219 06:34:40 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:28.219 06:34:40 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:28.477 06:34:40 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b8a95972-adac-4888-bff5-5983b481f9e9 00:08:28.477 06:34:40 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=b8a95972-adac-4888-bff5-5983b481f9e9 00:08:28.477 06:34:40 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:28.477 06:34:40 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:28.477 06:34:40 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:28.477 06:34:40 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:28.477 06:34:40 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:28.477 06:34:40 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:08:28.477 06:34:40 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:28.477 06:34:40 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:28.477 06:34:40 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:28.477 06:34:40 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.477 06:34:40 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.477 06:34:40 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.477 06:34:40 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:08:28.477 06:34:40 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.477 06:34:40 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:08:28.477 06:34:40 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:28.477 06:34:40 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:28.477 06:34:40 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:28.477 06:34:40 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:28.477 06:34:40 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:28.477 06:34:40 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:28.477 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:28.477 06:34:40 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:28.477 06:34:40 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:28.477 06:34:40 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:28.477 06:34:40 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:08:28.477 06:34:40 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:08:28.477 06:34:40 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:08:28.477 06:34:40 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:08:28.477 06:34:40 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:08:28.477 06:34:40 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:08:28.477 06:34:40 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:08:28.477 06:34:40 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:08:28.477 06:34:40 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:08:28.477 06:34:40 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:28.477 06:34:40 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:08:28.477 INFO: launching applications... 
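The xtrace above shows how test/json_config/common.sh tracks each application under test: one bash associative array per attribute, all keyed by the app name ('target') — its PID, its RPC socket, its spdk_tgt parameters, and the JSON config it is launched with. A minimal sketch of that bookkeeping pattern, with the values taken directly from the trace (the helper bodies in common.sh are not shown here and may do more):

    #!/usr/bin/env bash
    # Per-app bookkeeping as seen in the xtrace: one associative array per attribute.
    declare -A app_pid=(['target']='')
    declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
    declare -A app_params=(['target']='-m 0x1 -s 1024')
    declare -A configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')

    # Every later step (launch, RPC, shutdown) looks attributes up by the same key.
    app=target
    echo "INFO: launching applications..."
    # Illustrative only -- the real launcher below runs spdk_tgt and then waitforlisten.
    echo "would run: spdk_tgt ${app_params[$app]} -r ${app_socket[$app]} --json ${configs_path[$app]}"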
00:08:28.477 06:34:40 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:28.477 06:34:40 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:08:28.477 06:34:40 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:08:28.477 06:34:40 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:28.477 06:34:40 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:28.477 06:34:40 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:08:28.477 06:34:40 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:28.477 06:34:40 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:28.477 06:34:40 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57905 00:08:28.477 Waiting for target to run... 00:08:28.477 06:34:40 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:28.477 06:34:40 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57905 /var/tmp/spdk_tgt.sock 00:08:28.478 06:34:40 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57905 ']' 00:08:28.478 06:34:40 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:28.478 06:34:40 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:28.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:28.478 06:34:40 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:28.478 06:34:40 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:28.478 06:34:40 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:28.478 06:34:40 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:28.478 [2024-12-06 06:34:41.040528] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:08:28.478 [2024-12-06 06:34:41.040656] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57905 ] 00:08:28.735 [2024-12-06 06:34:41.359654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.735 [2024-12-06 06:34:41.451593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.299 06:34:41 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:29.299 06:34:41 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:08:29.299 00:08:29.299 06:34:41 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:08:29.299 INFO: shutting down applications... 00:08:29.299 06:34:41 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
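Start-up synchronization in the block above is polling-based: waitforlisten receives the new PID (57905 here) and the RPC socket path, sets max_retries=100, and prints the "Waiting for process to start up and listen on UNIX domain socket..." banner while it retries. A rough sketch of that loop, reconstructed from the variables visible in the trace — the real helper in common/autotest_common.sh may probe readiness differently (e.g. by issuing an actual RPC), and the retry interval here is an assumption:

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk_tgt.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died before it could listen
            [[ -S $rpc_addr ]] && return 0           # socket exists; treat it as accepting RPCs
            sleep 0.1                                # assumed cadence; not shown in the trace
        done
        return 1
    }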
00:08:29.299 06:34:41 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:08:29.299 06:34:41 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:08:29.299 06:34:41 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:29.299 06:34:41 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57905 ]] 00:08:29.299 06:34:41 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57905 00:08:29.299 06:34:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:29.300 06:34:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:29.300 06:34:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57905 00:08:29.300 06:34:41 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:29.864 06:34:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:29.864 06:34:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:29.864 06:34:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57905 00:08:29.864 06:34:42 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:30.427 06:34:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:30.427 06:34:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:30.427 06:34:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57905 00:08:30.427 06:34:42 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:30.992 06:34:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:30.992 06:34:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:30.992 06:34:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57905 00:08:30.993 06:34:43 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:31.251 06:34:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:31.251 06:34:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:31.251 06:34:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57905 00:08:31.251 06:34:43 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:31.251 06:34:43 json_config_extra_key -- json_config/common.sh@43 -- # break 00:08:31.251 SPDK target shutdown done 00:08:31.251 Success 00:08:31.251 06:34:43 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:31.251 06:34:43 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:31.251 06:34:43 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:08:31.251 00:08:31.251 real 0m3.131s 00:08:31.251 user 0m2.749s 00:08:31.251 sys 0m0.403s 00:08:31.251 ************************************ 00:08:31.251 END TEST json_config_extra_key 00:08:31.251 ************************************ 00:08:31.251 06:34:43 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.251 06:34:43 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:31.509 06:34:44 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:31.509 06:34:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:31.509 06:34:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.509 06:34:44 -- common/autotest_common.sh@10 -- # set +x 00:08:31.509 
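Shutdown is the mirror image, and it is fully visible above: json_config_test_shutdown_app sends SIGINT to the recorded PID and then polls kill -0 every half second, up to 30 iterations (about 15 s), breaking out as soon as the target is gone and only then clearing the PID and printing "SPDK target shutdown done". A sketch reconstructed directly from those kill -0 / sleep 0.5 iterations, reusing the app_pid array from the earlier sketch (the failure message is hypothetical; the trace only shows the success path):

    json_config_test_shutdown_app_sketch() {
        local app=$1 i
        [[ -n ${app_pid[$app]} ]] || return 1
        kill -SIGINT "${app_pid[$app]}"
        for ((i = 0; i < 30; i++)); do
            kill -0 "${app_pid[$app]}" 2>/dev/null || break   # process gone: clean exit
            sleep 0.5
        done
        if ((i == 30)); then
            echo "ERROR: $app did not shut down within 15 s" >&2   # hypothetical error path
            return 1
        fi
        app_pid[$app]=
        echo 'SPDK target shutdown done'
    }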
************************************ 00:08:31.509 START TEST alias_rpc 00:08:31.509 ************************************ 00:08:31.509 06:34:44 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:31.509 * Looking for test storage... 00:08:31.509 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:08:31.509 06:34:44 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:31.509 06:34:44 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:08:31.509 06:34:44 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:31.509 06:34:44 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:31.509 06:34:44 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:31.509 06:34:44 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:31.509 06:34:44 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:31.509 06:34:44 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:31.509 06:34:44 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:31.509 06:34:44 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:31.509 06:34:44 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:31.509 06:34:44 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:31.509 06:34:44 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:31.509 06:34:44 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:31.509 06:34:44 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:31.509 06:34:44 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:31.509 06:34:44 alias_rpc -- scripts/common.sh@345 -- # : 1 00:08:31.509 06:34:44 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:31.509 06:34:44 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:31.509 06:34:44 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:31.509 06:34:44 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:08:31.509 06:34:44 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:31.509 06:34:44 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:08:31.509 06:34:44 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:31.509 06:34:44 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:31.509 06:34:44 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:08:31.509 06:34:44 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:31.509 06:34:44 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:08:31.509 06:34:44 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:31.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:31.509 06:34:44 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:31.509 06:34:44 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:31.509 06:34:44 alias_rpc -- scripts/common.sh@368 -- # return 0 00:08:31.509 06:34:44 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:31.509 06:34:44 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:31.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.509 --rc genhtml_branch_coverage=1 00:08:31.509 --rc genhtml_function_coverage=1 00:08:31.509 --rc genhtml_legend=1 00:08:31.509 --rc geninfo_all_blocks=1 00:08:31.509 --rc geninfo_unexecuted_blocks=1 00:08:31.509 00:08:31.509 ' 00:08:31.509 06:34:44 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:31.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.509 --rc genhtml_branch_coverage=1 00:08:31.509 --rc genhtml_function_coverage=1 00:08:31.509 --rc genhtml_legend=1 00:08:31.509 --rc geninfo_all_blocks=1 00:08:31.509 --rc geninfo_unexecuted_blocks=1 00:08:31.509 00:08:31.509 ' 00:08:31.509 06:34:44 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:31.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.509 --rc genhtml_branch_coverage=1 00:08:31.509 --rc genhtml_function_coverage=1 00:08:31.509 --rc genhtml_legend=1 00:08:31.509 --rc geninfo_all_blocks=1 00:08:31.509 --rc geninfo_unexecuted_blocks=1 00:08:31.509 00:08:31.509 ' 00:08:31.509 06:34:44 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:31.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.509 --rc genhtml_branch_coverage=1 00:08:31.509 --rc genhtml_function_coverage=1 00:08:31.509 --rc genhtml_legend=1 00:08:31.509 --rc geninfo_all_blocks=1 00:08:31.509 --rc geninfo_unexecuted_blocks=1 00:08:31.509 00:08:31.509 ' 00:08:31.509 06:34:44 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:31.509 06:34:44 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57998 00:08:31.509 06:34:44 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57998 00:08:31.509 06:34:44 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57998 ']' 00:08:31.509 06:34:44 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.510 06:34:44 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:31.510 06:34:44 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:31.510 06:34:44 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.510 06:34:44 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:31.510 06:34:44 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:31.767 [2024-12-06 06:34:44.258046] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
00:08:31.767 [2024-12-06 06:34:44.258208] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57998 ] 00:08:31.767 [2024-12-06 06:34:44.418453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.023 [2024-12-06 06:34:44.521386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.585 06:34:45 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:32.585 06:34:45 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:32.585 06:34:45 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:08:32.841 06:34:45 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57998 00:08:32.841 06:34:45 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57998 ']' 00:08:32.841 06:34:45 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57998 00:08:32.841 06:34:45 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:08:32.841 06:34:45 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:32.841 06:34:45 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57998 00:08:32.841 killing process with pid 57998 00:08:32.841 06:34:45 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:32.841 06:34:45 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:32.841 06:34:45 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57998' 00:08:32.841 06:34:45 alias_rpc -- common/autotest_common.sh@973 -- # kill 57998 00:08:32.841 06:34:45 alias_rpc -- common/autotest_common.sh@978 -- # wait 57998 00:08:34.266 00:08:34.266 real 0m2.880s 00:08:34.266 user 0m2.961s 00:08:34.266 sys 0m0.420s 00:08:34.266 ************************************ 00:08:34.266 END TEST alias_rpc 00:08:34.266 ************************************ 00:08:34.266 06:34:46 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.266 06:34:46 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:34.266 06:34:46 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:08:34.266 06:34:46 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:34.266 06:34:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:34.266 06:34:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.266 06:34:46 -- common/autotest_common.sh@10 -- # set +x 00:08:34.266 ************************************ 00:08:34.266 START TEST spdkcli_tcp 00:08:34.266 ************************************ 00:08:34.266 06:34:46 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:34.524 * Looking for test storage... 
00:08:34.524 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:08:34.524 06:34:47 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:34.524 06:34:47 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:08:34.524 06:34:47 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:34.524 06:34:47 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:34.524 06:34:47 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:34.524 06:34:47 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:34.524 06:34:47 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:34.524 06:34:47 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:34.524 06:34:47 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:34.524 06:34:47 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:34.524 06:34:47 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:34.524 06:34:47 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:34.524 06:34:47 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:34.524 06:34:47 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:34.524 06:34:47 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:34.524 06:34:47 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:34.524 06:34:47 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:08:34.524 06:34:47 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:34.524 06:34:47 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:34.524 06:34:47 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:34.524 06:34:47 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:08:34.524 06:34:47 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:34.524 06:34:47 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:08:34.524 06:34:47 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:34.524 06:34:47 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:34.524 06:34:47 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:08:34.524 06:34:47 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:34.524 06:34:47 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:08:34.525 06:34:47 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:34.525 06:34:47 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:34.525 06:34:47 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:34.525 06:34:47 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:08:34.525 06:34:47 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:34.525 06:34:47 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:34.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.525 --rc genhtml_branch_coverage=1 00:08:34.525 --rc genhtml_function_coverage=1 00:08:34.525 --rc genhtml_legend=1 00:08:34.525 --rc geninfo_all_blocks=1 00:08:34.525 --rc geninfo_unexecuted_blocks=1 00:08:34.525 00:08:34.525 ' 00:08:34.525 06:34:47 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:34.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.525 --rc genhtml_branch_coverage=1 00:08:34.525 --rc genhtml_function_coverage=1 00:08:34.525 --rc genhtml_legend=1 00:08:34.525 --rc geninfo_all_blocks=1 00:08:34.525 --rc geninfo_unexecuted_blocks=1 00:08:34.525 
00:08:34.525 ' 00:08:34.525 06:34:47 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:34.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.525 --rc genhtml_branch_coverage=1 00:08:34.525 --rc genhtml_function_coverage=1 00:08:34.525 --rc genhtml_legend=1 00:08:34.525 --rc geninfo_all_blocks=1 00:08:34.525 --rc geninfo_unexecuted_blocks=1 00:08:34.525 00:08:34.525 ' 00:08:34.525 06:34:47 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:34.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.525 --rc genhtml_branch_coverage=1 00:08:34.525 --rc genhtml_function_coverage=1 00:08:34.525 --rc genhtml_legend=1 00:08:34.525 --rc geninfo_all_blocks=1 00:08:34.525 --rc geninfo_unexecuted_blocks=1 00:08:34.525 00:08:34.525 ' 00:08:34.525 06:34:47 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:08:34.525 06:34:47 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:08:34.525 06:34:47 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:08:34.525 06:34:47 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:08:34.525 06:34:47 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:08:34.525 06:34:47 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:08:34.525 06:34:47 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:08:34.525 06:34:47 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:34.525 06:34:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:34.525 06:34:47 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58094 00:08:34.525 06:34:47 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58094 00:08:34.525 06:34:47 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58094 ']' 00:08:34.525 06:34:47 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.525 06:34:47 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:34.525 06:34:47 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.525 06:34:47 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:34.525 06:34:47 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:08:34.525 06:34:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:34.525 [2024-12-06 06:34:47.202034] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
00:08:34.525 [2024-12-06 06:34:47.202180] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58094 ] 00:08:34.782 [2024-12-06 06:34:47.362546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:34.782 [2024-12-06 06:34:47.466234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.782 [2024-12-06 06:34:47.466301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.346 06:34:48 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:35.346 06:34:48 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:08:35.603 06:34:48 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:08:35.603 06:34:48 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58111 00:08:35.603 06:34:48 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:08:35.603 [ 00:08:35.603 "bdev_malloc_delete", 00:08:35.603 "bdev_malloc_create", 00:08:35.603 "bdev_null_resize", 00:08:35.603 "bdev_null_delete", 00:08:35.603 "bdev_null_create", 00:08:35.603 "bdev_nvme_cuse_unregister", 00:08:35.603 "bdev_nvme_cuse_register", 00:08:35.603 "bdev_opal_new_user", 00:08:35.603 "bdev_opal_set_lock_state", 00:08:35.603 "bdev_opal_delete", 00:08:35.603 "bdev_opal_get_info", 00:08:35.603 "bdev_opal_create", 00:08:35.603 "bdev_nvme_opal_revert", 00:08:35.603 "bdev_nvme_opal_init", 00:08:35.603 "bdev_nvme_send_cmd", 00:08:35.603 "bdev_nvme_set_keys", 00:08:35.603 "bdev_nvme_get_path_iostat", 00:08:35.603 "bdev_nvme_get_mdns_discovery_info", 00:08:35.603 "bdev_nvme_stop_mdns_discovery", 00:08:35.603 "bdev_nvme_start_mdns_discovery", 00:08:35.603 "bdev_nvme_set_multipath_policy", 00:08:35.603 "bdev_nvme_set_preferred_path", 00:08:35.603 "bdev_nvme_get_io_paths", 00:08:35.603 "bdev_nvme_remove_error_injection", 00:08:35.603 "bdev_nvme_add_error_injection", 00:08:35.603 "bdev_nvme_get_discovery_info", 00:08:35.604 "bdev_nvme_stop_discovery", 00:08:35.604 "bdev_nvme_start_discovery", 00:08:35.604 "bdev_nvme_get_controller_health_info", 00:08:35.604 "bdev_nvme_disable_controller", 00:08:35.604 "bdev_nvme_enable_controller", 00:08:35.604 "bdev_nvme_reset_controller", 00:08:35.604 "bdev_nvme_get_transport_statistics", 00:08:35.604 "bdev_nvme_apply_firmware", 00:08:35.604 "bdev_nvme_detach_controller", 00:08:35.604 "bdev_nvme_get_controllers", 00:08:35.604 "bdev_nvme_attach_controller", 00:08:35.604 "bdev_nvme_set_hotplug", 00:08:35.604 "bdev_nvme_set_options", 00:08:35.604 "bdev_passthru_delete", 00:08:35.604 "bdev_passthru_create", 00:08:35.604 "bdev_lvol_set_parent_bdev", 00:08:35.604 "bdev_lvol_set_parent", 00:08:35.604 "bdev_lvol_check_shallow_copy", 00:08:35.604 "bdev_lvol_start_shallow_copy", 00:08:35.604 "bdev_lvol_grow_lvstore", 00:08:35.604 "bdev_lvol_get_lvols", 00:08:35.604 "bdev_lvol_get_lvstores", 00:08:35.604 "bdev_lvol_delete", 00:08:35.604 "bdev_lvol_set_read_only", 00:08:35.604 "bdev_lvol_resize", 00:08:35.604 "bdev_lvol_decouple_parent", 00:08:35.604 "bdev_lvol_inflate", 00:08:35.604 "bdev_lvol_rename", 00:08:35.604 "bdev_lvol_clone_bdev", 00:08:35.604 "bdev_lvol_clone", 00:08:35.604 "bdev_lvol_snapshot", 00:08:35.604 "bdev_lvol_create", 00:08:35.604 "bdev_lvol_delete_lvstore", 00:08:35.604 "bdev_lvol_rename_lvstore", 00:08:35.604 
"bdev_lvol_create_lvstore", 00:08:35.604 "bdev_raid_set_options", 00:08:35.604 "bdev_raid_remove_base_bdev", 00:08:35.604 "bdev_raid_add_base_bdev", 00:08:35.604 "bdev_raid_delete", 00:08:35.604 "bdev_raid_create", 00:08:35.604 "bdev_raid_get_bdevs", 00:08:35.604 "bdev_error_inject_error", 00:08:35.604 "bdev_error_delete", 00:08:35.604 "bdev_error_create", 00:08:35.604 "bdev_split_delete", 00:08:35.604 "bdev_split_create", 00:08:35.604 "bdev_delay_delete", 00:08:35.604 "bdev_delay_create", 00:08:35.604 "bdev_delay_update_latency", 00:08:35.604 "bdev_zone_block_delete", 00:08:35.604 "bdev_zone_block_create", 00:08:35.604 "blobfs_create", 00:08:35.604 "blobfs_detect", 00:08:35.604 "blobfs_set_cache_size", 00:08:35.604 "bdev_xnvme_delete", 00:08:35.604 "bdev_xnvme_create", 00:08:35.604 "bdev_aio_delete", 00:08:35.604 "bdev_aio_rescan", 00:08:35.604 "bdev_aio_create", 00:08:35.604 "bdev_ftl_set_property", 00:08:35.604 "bdev_ftl_get_properties", 00:08:35.604 "bdev_ftl_get_stats", 00:08:35.604 "bdev_ftl_unmap", 00:08:35.604 "bdev_ftl_unload", 00:08:35.604 "bdev_ftl_delete", 00:08:35.604 "bdev_ftl_load", 00:08:35.604 "bdev_ftl_create", 00:08:35.604 "bdev_virtio_attach_controller", 00:08:35.604 "bdev_virtio_scsi_get_devices", 00:08:35.604 "bdev_virtio_detach_controller", 00:08:35.604 "bdev_virtio_blk_set_hotplug", 00:08:35.604 "bdev_iscsi_delete", 00:08:35.604 "bdev_iscsi_create", 00:08:35.604 "bdev_iscsi_set_options", 00:08:35.604 "accel_error_inject_error", 00:08:35.604 "ioat_scan_accel_module", 00:08:35.604 "dsa_scan_accel_module", 00:08:35.604 "iaa_scan_accel_module", 00:08:35.604 "keyring_file_remove_key", 00:08:35.604 "keyring_file_add_key", 00:08:35.604 "keyring_linux_set_options", 00:08:35.604 "fsdev_aio_delete", 00:08:35.604 "fsdev_aio_create", 00:08:35.604 "iscsi_get_histogram", 00:08:35.604 "iscsi_enable_histogram", 00:08:35.604 "iscsi_set_options", 00:08:35.604 "iscsi_get_auth_groups", 00:08:35.604 "iscsi_auth_group_remove_secret", 00:08:35.604 "iscsi_auth_group_add_secret", 00:08:35.604 "iscsi_delete_auth_group", 00:08:35.604 "iscsi_create_auth_group", 00:08:35.604 "iscsi_set_discovery_auth", 00:08:35.604 "iscsi_get_options", 00:08:35.604 "iscsi_target_node_request_logout", 00:08:35.604 "iscsi_target_node_set_redirect", 00:08:35.604 "iscsi_target_node_set_auth", 00:08:35.604 "iscsi_target_node_add_lun", 00:08:35.604 "iscsi_get_stats", 00:08:35.604 "iscsi_get_connections", 00:08:35.604 "iscsi_portal_group_set_auth", 00:08:35.604 "iscsi_start_portal_group", 00:08:35.604 "iscsi_delete_portal_group", 00:08:35.604 "iscsi_create_portal_group", 00:08:35.604 "iscsi_get_portal_groups", 00:08:35.604 "iscsi_delete_target_node", 00:08:35.604 "iscsi_target_node_remove_pg_ig_maps", 00:08:35.604 "iscsi_target_node_add_pg_ig_maps", 00:08:35.604 "iscsi_create_target_node", 00:08:35.604 "iscsi_get_target_nodes", 00:08:35.604 "iscsi_delete_initiator_group", 00:08:35.604 "iscsi_initiator_group_remove_initiators", 00:08:35.604 "iscsi_initiator_group_add_initiators", 00:08:35.604 "iscsi_create_initiator_group", 00:08:35.604 "iscsi_get_initiator_groups", 00:08:35.604 "nvmf_set_crdt", 00:08:35.604 "nvmf_set_config", 00:08:35.604 "nvmf_set_max_subsystems", 00:08:35.604 "nvmf_stop_mdns_prr", 00:08:35.604 "nvmf_publish_mdns_prr", 00:08:35.604 "nvmf_subsystem_get_listeners", 00:08:35.604 "nvmf_subsystem_get_qpairs", 00:08:35.604 "nvmf_subsystem_get_controllers", 00:08:35.604 "nvmf_get_stats", 00:08:35.604 "nvmf_get_transports", 00:08:35.604 "nvmf_create_transport", 00:08:35.604 "nvmf_get_targets", 00:08:35.604 
"nvmf_delete_target", 00:08:35.604 "nvmf_create_target", 00:08:35.604 "nvmf_subsystem_allow_any_host", 00:08:35.604 "nvmf_subsystem_set_keys", 00:08:35.604 "nvmf_subsystem_remove_host", 00:08:35.604 "nvmf_subsystem_add_host", 00:08:35.604 "nvmf_ns_remove_host", 00:08:35.604 "nvmf_ns_add_host", 00:08:35.604 "nvmf_subsystem_remove_ns", 00:08:35.604 "nvmf_subsystem_set_ns_ana_group", 00:08:35.604 "nvmf_subsystem_add_ns", 00:08:35.604 "nvmf_subsystem_listener_set_ana_state", 00:08:35.604 "nvmf_discovery_get_referrals", 00:08:35.604 "nvmf_discovery_remove_referral", 00:08:35.604 "nvmf_discovery_add_referral", 00:08:35.604 "nvmf_subsystem_remove_listener", 00:08:35.604 "nvmf_subsystem_add_listener", 00:08:35.604 "nvmf_delete_subsystem", 00:08:35.604 "nvmf_create_subsystem", 00:08:35.604 "nvmf_get_subsystems", 00:08:35.604 "env_dpdk_get_mem_stats", 00:08:35.604 "nbd_get_disks", 00:08:35.604 "nbd_stop_disk", 00:08:35.604 "nbd_start_disk", 00:08:35.604 "ublk_recover_disk", 00:08:35.604 "ublk_get_disks", 00:08:35.604 "ublk_stop_disk", 00:08:35.604 "ublk_start_disk", 00:08:35.604 "ublk_destroy_target", 00:08:35.604 "ublk_create_target", 00:08:35.604 "virtio_blk_create_transport", 00:08:35.604 "virtio_blk_get_transports", 00:08:35.604 "vhost_controller_set_coalescing", 00:08:35.604 "vhost_get_controllers", 00:08:35.604 "vhost_delete_controller", 00:08:35.604 "vhost_create_blk_controller", 00:08:35.604 "vhost_scsi_controller_remove_target", 00:08:35.604 "vhost_scsi_controller_add_target", 00:08:35.604 "vhost_start_scsi_controller", 00:08:35.604 "vhost_create_scsi_controller", 00:08:35.604 "thread_set_cpumask", 00:08:35.604 "scheduler_set_options", 00:08:35.604 "framework_get_governor", 00:08:35.604 "framework_get_scheduler", 00:08:35.604 "framework_set_scheduler", 00:08:35.604 "framework_get_reactors", 00:08:35.604 "thread_get_io_channels", 00:08:35.604 "thread_get_pollers", 00:08:35.604 "thread_get_stats", 00:08:35.604 "framework_monitor_context_switch", 00:08:35.604 "spdk_kill_instance", 00:08:35.604 "log_enable_timestamps", 00:08:35.604 "log_get_flags", 00:08:35.604 "log_clear_flag", 00:08:35.604 "log_set_flag", 00:08:35.604 "log_get_level", 00:08:35.604 "log_set_level", 00:08:35.604 "log_get_print_level", 00:08:35.604 "log_set_print_level", 00:08:35.604 "framework_enable_cpumask_locks", 00:08:35.604 "framework_disable_cpumask_locks", 00:08:35.604 "framework_wait_init", 00:08:35.604 "framework_start_init", 00:08:35.604 "scsi_get_devices", 00:08:35.604 "bdev_get_histogram", 00:08:35.604 "bdev_enable_histogram", 00:08:35.604 "bdev_set_qos_limit", 00:08:35.604 "bdev_set_qd_sampling_period", 00:08:35.604 "bdev_get_bdevs", 00:08:35.604 "bdev_reset_iostat", 00:08:35.604 "bdev_get_iostat", 00:08:35.604 "bdev_examine", 00:08:35.604 "bdev_wait_for_examine", 00:08:35.604 "bdev_set_options", 00:08:35.604 "accel_get_stats", 00:08:35.604 "accel_set_options", 00:08:35.604 "accel_set_driver", 00:08:35.604 "accel_crypto_key_destroy", 00:08:35.604 "accel_crypto_keys_get", 00:08:35.604 "accel_crypto_key_create", 00:08:35.604 "accel_assign_opc", 00:08:35.604 "accel_get_module_info", 00:08:35.604 "accel_get_opc_assignments", 00:08:35.604 "vmd_rescan", 00:08:35.604 "vmd_remove_device", 00:08:35.604 "vmd_enable", 00:08:35.604 "sock_get_default_impl", 00:08:35.604 "sock_set_default_impl", 00:08:35.604 "sock_impl_set_options", 00:08:35.604 "sock_impl_get_options", 00:08:35.604 "iobuf_get_stats", 00:08:35.604 "iobuf_set_options", 00:08:35.604 "keyring_get_keys", 00:08:35.604 "framework_get_pci_devices", 00:08:35.604 
"framework_get_config", 00:08:35.604 "framework_get_subsystems", 00:08:35.604 "fsdev_set_opts", 00:08:35.604 "fsdev_get_opts", 00:08:35.604 "trace_get_info", 00:08:35.604 "trace_get_tpoint_group_mask", 00:08:35.604 "trace_disable_tpoint_group", 00:08:35.604 "trace_enable_tpoint_group", 00:08:35.604 "trace_clear_tpoint_mask", 00:08:35.604 "trace_set_tpoint_mask", 00:08:35.604 "notify_get_notifications", 00:08:35.604 "notify_get_types", 00:08:35.604 "spdk_get_version", 00:08:35.604 "rpc_get_methods" 00:08:35.604 ] 00:08:35.604 06:34:48 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:08:35.604 06:34:48 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:35.604 06:34:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:35.862 06:34:48 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:35.862 06:34:48 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58094 00:08:35.862 06:34:48 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58094 ']' 00:08:35.862 06:34:48 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58094 00:08:35.862 06:34:48 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:08:35.862 06:34:48 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:35.862 06:34:48 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58094 00:08:35.862 killing process with pid 58094 00:08:35.862 06:34:48 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:35.862 06:34:48 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:35.862 06:34:48 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58094' 00:08:35.862 06:34:48 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58094 00:08:35.862 06:34:48 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58094 00:08:37.238 00:08:37.238 real 0m2.938s 00:08:37.238 user 0m5.322s 00:08:37.238 sys 0m0.452s 00:08:37.238 06:34:49 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.238 ************************************ 00:08:37.238 END TEST spdkcli_tcp 00:08:37.238 ************************************ 00:08:37.238 06:34:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:37.238 06:34:49 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:37.238 06:34:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:37.238 06:34:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.238 06:34:49 -- common/autotest_common.sh@10 -- # set +x 00:08:37.238 ************************************ 00:08:37.238 START TEST dpdk_mem_utility 00:08:37.238 ************************************ 00:08:37.238 06:34:49 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:37.497 * Looking for test storage... 
00:08:37.497 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:08:37.497 06:34:50 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:37.497 06:34:50 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:37.497 06:34:50 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:08:37.497 06:34:50 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:37.497 06:34:50 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:37.497 06:34:50 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:37.497 06:34:50 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:37.497 06:34:50 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.497 06:34:50 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:08:37.497 06:34:50 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:08:37.497 06:34:50 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:08:37.498 06:34:50 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:08:37.498 06:34:50 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:08:37.498 06:34:50 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:08:37.498 06:34:50 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:37.498 06:34:50 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:08:37.498 06:34:50 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:08:37.498 06:34:50 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:37.498 06:34:50 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:37.498 06:34:50 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:08:37.498 06:34:50 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:08:37.498 06:34:50 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.498 06:34:50 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:08:37.498 06:34:50 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:08:37.498 06:34:50 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:08:37.498 06:34:50 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:08:37.498 06:34:50 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:37.498 06:34:50 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:08:37.498 06:34:50 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:08:37.498 06:34:50 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:37.498 06:34:50 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:37.498 06:34:50 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:08:37.498 06:34:50 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:37.498 06:34:50 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:37.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.498 --rc genhtml_branch_coverage=1 00:08:37.498 --rc genhtml_function_coverage=1 00:08:37.498 --rc genhtml_legend=1 00:08:37.498 --rc geninfo_all_blocks=1 00:08:37.498 --rc geninfo_unexecuted_blocks=1 00:08:37.498 00:08:37.498 ' 00:08:37.498 06:34:50 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:37.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.498 --rc 
genhtml_branch_coverage=1 00:08:37.498 --rc genhtml_function_coverage=1 00:08:37.498 --rc genhtml_legend=1 00:08:37.498 --rc geninfo_all_blocks=1 00:08:37.498 --rc geninfo_unexecuted_blocks=1 00:08:37.498 00:08:37.498 ' 00:08:37.498 06:34:50 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:37.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.498 --rc genhtml_branch_coverage=1 00:08:37.498 --rc genhtml_function_coverage=1 00:08:37.498 --rc genhtml_legend=1 00:08:37.498 --rc geninfo_all_blocks=1 00:08:37.498 --rc geninfo_unexecuted_blocks=1 00:08:37.498 00:08:37.498 ' 00:08:37.498 06:34:50 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:37.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.498 --rc genhtml_branch_coverage=1 00:08:37.498 --rc genhtml_function_coverage=1 00:08:37.498 --rc genhtml_legend=1 00:08:37.498 --rc geninfo_all_blocks=1 00:08:37.498 --rc geninfo_unexecuted_blocks=1 00:08:37.498 00:08:37.498 ' 00:08:37.498 06:34:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:37.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.498 06:34:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58205 00:08:37.498 06:34:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58205 00:08:37.498 06:34:50 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58205 ']' 00:08:37.498 06:34:50 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.498 06:34:50 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:37.498 06:34:50 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.498 06:34:50 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:37.498 06:34:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:37.498 06:34:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:37.498 [2024-12-06 06:34:50.189508] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
00:08:37.498 [2024-12-06 06:34:50.189640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58205 ] 00:08:37.759 [2024-12-06 06:34:50.345086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.759 [2024-12-06 06:34:50.466444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.695 06:34:51 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:38.695 06:34:51 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:08:38.695 06:34:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:38.695 06:34:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:38.695 06:34:51 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.695 06:34:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:38.695 { 00:08:38.695 "filename": "/tmp/spdk_mem_dump.txt" 00:08:38.695 } 00:08:38.695 06:34:51 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.695 06:34:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:38.695 DPDK memory size 824.000000 MiB in 1 heap(s) 00:08:38.695 1 heaps totaling size 824.000000 MiB 00:08:38.695 size: 824.000000 MiB heap id: 0 00:08:38.695 end heaps---------- 00:08:38.695 9 mempools totaling size 603.782043 MiB 00:08:38.695 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:38.695 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:38.695 size: 100.555481 MiB name: bdev_io_58205 00:08:38.695 size: 50.003479 MiB name: msgpool_58205 00:08:38.695 size: 36.509338 MiB name: fsdev_io_58205 00:08:38.695 size: 21.763794 MiB name: PDU_Pool 00:08:38.695 size: 19.513306 MiB name: SCSI_TASK_Pool 00:08:38.695 size: 4.133484 MiB name: evtpool_58205 00:08:38.695 size: 0.026123 MiB name: Session_Pool 00:08:38.695 end mempools------- 00:08:38.695 6 memzones totaling size 4.142822 MiB 00:08:38.695 size: 1.000366 MiB name: RG_ring_0_58205 00:08:38.695 size: 1.000366 MiB name: RG_ring_1_58205 00:08:38.695 size: 1.000366 MiB name: RG_ring_4_58205 00:08:38.695 size: 1.000366 MiB name: RG_ring_5_58205 00:08:38.695 size: 0.125366 MiB name: RG_ring_2_58205 00:08:38.695 size: 0.015991 MiB name: RG_ring_3_58205 00:08:38.695 end memzones------- 00:08:38.695 06:34:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:08:38.695 heap id: 0 total size: 824.000000 MiB number of busy elements: 324 number of free elements: 18 00:08:38.695 list of free elements. 
size: 16.779175 MiB 00:08:38.695 element at address: 0x200006400000 with size: 1.995972 MiB 00:08:38.695 element at address: 0x20000a600000 with size: 1.995972 MiB 00:08:38.695 element at address: 0x200003e00000 with size: 1.991028 MiB 00:08:38.695 element at address: 0x200019500040 with size: 0.999939 MiB 00:08:38.695 element at address: 0x200019900040 with size: 0.999939 MiB 00:08:38.695 element at address: 0x200019a00000 with size: 0.999084 MiB 00:08:38.695 element at address: 0x200032600000 with size: 0.994324 MiB 00:08:38.695 element at address: 0x200000400000 with size: 0.992004 MiB 00:08:38.695 element at address: 0x200019200000 with size: 0.959656 MiB 00:08:38.695 element at address: 0x200019d00040 with size: 0.936401 MiB 00:08:38.695 element at address: 0x200000200000 with size: 0.716980 MiB 00:08:38.695 element at address: 0x20001b400000 with size: 0.559753 MiB 00:08:38.695 element at address: 0x200000c00000 with size: 0.489197 MiB 00:08:38.695 element at address: 0x200019600000 with size: 0.487976 MiB 00:08:38.695 element at address: 0x200019e00000 with size: 0.485413 MiB 00:08:38.695 element at address: 0x200012c00000 with size: 0.433228 MiB 00:08:38.695 element at address: 0x200028800000 with size: 0.391418 MiB 00:08:38.695 element at address: 0x200000800000 with size: 0.350891 MiB 00:08:38.695 list of standard malloc elements. size: 199.289917 MiB 00:08:38.695 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:08:38.695 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:08:38.695 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:08:38.695 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:08:38.695 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:08:38.695 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:08:38.695 element at address: 0x200019deff40 with size: 0.062683 MiB 00:08:38.695 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:08:38.695 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:08:38.695 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:08:38.695 element at address: 0x200012bff040 with size: 0.000305 MiB 00:08:38.695 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:08:38.695 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:08:38.695 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:08:38.695 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:08:38.695 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:08:38.695 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:08:38.695 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:08:38.695 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:08:38.695 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:08:38.695 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:08:38.695 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:08:38.695 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:08:38.695 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:08:38.695 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:08:38.695 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:08:38.695 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:08:38.696 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:08:38.696 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:08:38.696 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:08:38.696 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:08:38.696 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:08:38.696 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:08:38.696 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:08:38.696 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:08:38.696 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:08:38.696 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:08:38.696 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:08:38.696 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:08:38.696 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:08:38.696 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:08:38.696 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:08:38.696 element at 
address: 0x200000c7e1c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200000cff000 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200012bff180 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200012bff280 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200012bff380 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200012bff480 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200012bff580 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200012bff680 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200012bff780 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200012bff880 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200012bff980 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200012c6f780 
with size: 0.000244 MiB 00:08:38.696 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200019affc40 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20001b48f4c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20001b48f5c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20001b48f6c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20001b48f7c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20001b48f8c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20001b48f9c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20001b48fac0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20001b4910c0 with size: 0.000244 MiB 
00:08:38.696 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:08:38.696 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:08:38.697 element at 
address: 0x20001b4942c0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:08:38.697 element at address: 0x200028864340 with size: 0.000244 MiB 00:08:38.697 element at address: 0x200028864440 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886b100 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886b380 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886b480 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886b580 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886b680 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886b780 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886b880 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886b980 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886be80 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886c080 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886c180 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886c280 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886c380 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886c480 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886c580 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886c680 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886c780 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886c880 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886c980 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886cf80 
with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886d080 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886d180 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886d280 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886d380 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886d480 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886d580 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886d680 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886d780 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886d880 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886d980 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886da80 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886db80 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886de80 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886df80 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886e080 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886e180 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886e280 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886e380 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886e480 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886e580 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886e680 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886e780 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886e880 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886e980 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886f080 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886f180 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886f280 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886f380 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886f480 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886f580 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886f680 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886f780 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886f880 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886f980 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:08:38.697 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:08:38.697 list of memzone associated elements. 
size: 607.930908 MiB 00:08:38.697 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:08:38.697 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:08:38.697 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:08:38.697 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:08:38.697 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:08:38.697 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58205_0 00:08:38.697 element at address: 0x200000dff340 with size: 48.003113 MiB 00:08:38.697 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58205_0 00:08:38.697 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:08:38.697 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58205_0 00:08:38.697 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:08:38.698 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:08:38.698 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:08:38.698 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:08:38.698 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:08:38.698 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58205_0 00:08:38.698 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:08:38.698 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58205 00:08:38.698 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:08:38.698 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58205 00:08:38.698 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:08:38.698 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:08:38.698 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:08:38.698 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:08:38.698 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:08:38.698 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:08:38.698 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:08:38.698 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:08:38.698 element at address: 0x200000cff100 with size: 1.000549 MiB 00:08:38.698 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58205 00:08:38.698 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:08:38.698 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58205 00:08:38.698 element at address: 0x200019affd40 with size: 1.000549 MiB 00:08:38.698 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58205 00:08:38.698 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:08:38.698 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58205 00:08:38.698 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:08:38.698 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58205 00:08:38.698 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:08:38.698 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58205 00:08:38.698 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:08:38.698 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:08:38.698 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:08:38.698 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:08:38.698 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:08:38.698 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:08:38.698 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:08:38.698 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58205 00:08:38.698 element at address: 0x20000085df80 with size: 0.125549 MiB 00:08:38.698 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58205 00:08:38.698 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:08:38.698 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:08:38.698 element at address: 0x200028864540 with size: 0.023804 MiB 00:08:38.698 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:08:38.698 element at address: 0x200000859d40 with size: 0.016174 MiB 00:08:38.698 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58205 00:08:38.698 element at address: 0x20002886a6c0 with size: 0.002502 MiB 00:08:38.698 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:08:38.698 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:08:38.698 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58205 00:08:38.698 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:08:38.698 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58205 00:08:38.698 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:08:38.698 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58205 00:08:38.698 element at address: 0x20002886b200 with size: 0.000366 MiB 00:08:38.698 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:08:38.698 06:34:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:08:38.698 06:34:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58205 00:08:38.698 06:34:51 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58205 ']' 00:08:38.698 06:34:51 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58205 00:08:38.698 06:34:51 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:08:38.698 06:34:51 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:38.698 06:34:51 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58205 00:08:38.698 killing process with pid 58205 00:08:38.698 06:34:51 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:38.698 06:34:51 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:38.698 06:34:51 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58205' 00:08:38.698 06:34:51 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58205 00:08:38.698 06:34:51 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58205 00:08:40.071 ************************************ 00:08:40.071 END TEST dpdk_mem_utility 00:08:40.071 ************************************ 00:08:40.071 00:08:40.071 real 0m2.796s 00:08:40.071 user 0m2.753s 00:08:40.071 sys 0m0.466s 00:08:40.071 06:34:52 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.071 06:34:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:40.071 06:34:52 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:40.071 06:34:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:40.071 06:34:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.071 06:34:52 -- common/autotest_common.sh@10 -- # set +x 
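The dpdk_mem_utility test that just finished reduces to three commands: dump the target's DPDK memory state over RPC, then render the dump twice with dpdk_mem_info.py. A minimal sketch of doing the same by hand is below; the two scripts, the env_dpdk_get_mem_stats RPC, and the -m 0 flag are taken from the trace above, while the build/bin/spdk_tgt path and the sleep-based wait (the harness uses waitforlisten) are assumptions.

    #!/usr/bin/env bash
    # Sketch: reproduce the dpdk_mem_utility flow by hand.
    SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}

    "$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 &   # one core, matching "-c 0x1" in the EAL line above
    tgt_pid=$!
    sleep 2                                   # crude stand-in for the harness's waitforlisten

    # Ask the target to write its DPDK memory state to disk; the RPC replies
    # with the dump location, /tmp/spdk_mem_dump.txt in the run above.
    "$SPDK_DIR/scripts/rpc.py" env_dpdk_get_mem_stats

    # Summarize heaps/mempools/memzones, then expand heap 0 element by element.
    "$SPDK_DIR/scripts/dpdk_mem_info.py"
    "$SPDK_DIR/scripts/dpdk_mem_info.py" -m 0

    kill "$tgt_pid"

Note the ordering: dpdk_mem_info.py parses the dump file named in the RPC reply, so env_dpdk_get_mem_stats has to run first.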
00:08:40.330 ************************************ 00:08:40.330 START TEST event 00:08:40.330 ************************************ 00:08:40.330 06:34:52 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:40.330 * Looking for test storage... 00:08:40.330 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:40.330 06:34:52 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:40.330 06:34:52 event -- common/autotest_common.sh@1711 -- # lcov --version 00:08:40.330 06:34:52 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:40.330 06:34:52 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:40.330 06:34:52 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:40.330 06:34:52 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:40.330 06:34:52 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:40.330 06:34:52 event -- scripts/common.sh@336 -- # IFS=.-: 00:08:40.330 06:34:52 event -- scripts/common.sh@336 -- # read -ra ver1 00:08:40.330 06:34:52 event -- scripts/common.sh@337 -- # IFS=.-: 00:08:40.330 06:34:52 event -- scripts/common.sh@337 -- # read -ra ver2 00:08:40.330 06:34:52 event -- scripts/common.sh@338 -- # local 'op=<' 00:08:40.330 06:34:52 event -- scripts/common.sh@340 -- # ver1_l=2 00:08:40.330 06:34:52 event -- scripts/common.sh@341 -- # ver2_l=1 00:08:40.330 06:34:52 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:40.330 06:34:52 event -- scripts/common.sh@344 -- # case "$op" in 00:08:40.330 06:34:52 event -- scripts/common.sh@345 -- # : 1 00:08:40.330 06:34:52 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:40.330 06:34:52 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:40.330 06:34:52 event -- scripts/common.sh@365 -- # decimal 1 00:08:40.330 06:34:52 event -- scripts/common.sh@353 -- # local d=1 00:08:40.330 06:34:52 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:40.330 06:34:52 event -- scripts/common.sh@355 -- # echo 1 00:08:40.330 06:34:52 event -- scripts/common.sh@365 -- # ver1[v]=1 00:08:40.330 06:34:52 event -- scripts/common.sh@366 -- # decimal 2 00:08:40.330 06:34:52 event -- scripts/common.sh@353 -- # local d=2 00:08:40.330 06:34:52 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:40.330 06:34:52 event -- scripts/common.sh@355 -- # echo 2 00:08:40.330 06:34:52 event -- scripts/common.sh@366 -- # ver2[v]=2 00:08:40.330 06:34:52 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:40.330 06:34:52 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:40.330 06:34:52 event -- scripts/common.sh@368 -- # return 0 00:08:40.330 06:34:52 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:40.330 06:34:52 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:40.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.330 --rc genhtml_branch_coverage=1 00:08:40.330 --rc genhtml_function_coverage=1 00:08:40.330 --rc genhtml_legend=1 00:08:40.330 --rc geninfo_all_blocks=1 00:08:40.330 --rc geninfo_unexecuted_blocks=1 00:08:40.330 00:08:40.330 ' 00:08:40.330 06:34:52 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:40.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.330 --rc genhtml_branch_coverage=1 00:08:40.330 --rc genhtml_function_coverage=1 00:08:40.330 --rc genhtml_legend=1 00:08:40.330 --rc 
geninfo_all_blocks=1 00:08:40.330 --rc geninfo_unexecuted_blocks=1 00:08:40.330 00:08:40.330 ' 00:08:40.330 06:34:52 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:40.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.330 --rc genhtml_branch_coverage=1 00:08:40.330 --rc genhtml_function_coverage=1 00:08:40.330 --rc genhtml_legend=1 00:08:40.330 --rc geninfo_all_blocks=1 00:08:40.330 --rc geninfo_unexecuted_blocks=1 00:08:40.330 00:08:40.330 ' 00:08:40.330 06:34:52 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:40.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.330 --rc genhtml_branch_coverage=1 00:08:40.330 --rc genhtml_function_coverage=1 00:08:40.330 --rc genhtml_legend=1 00:08:40.330 --rc geninfo_all_blocks=1 00:08:40.330 --rc geninfo_unexecuted_blocks=1 00:08:40.330 00:08:40.330 ' 00:08:40.330 06:34:52 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:40.330 06:34:52 event -- bdev/nbd_common.sh@6 -- # set -e 00:08:40.330 06:34:52 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:40.330 06:34:52 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:40.330 06:34:52 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.330 06:34:52 event -- common/autotest_common.sh@10 -- # set +x 00:08:40.330 ************************************ 00:08:40.330 START TEST event_perf 00:08:40.330 ************************************ 00:08:40.330 06:34:52 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:40.330 Running I/O for 1 seconds...[2024-12-06 06:34:53.014262] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:08:40.330 [2024-12-06 06:34:53.014469] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58302 ] 00:08:40.589 [2024-12-06 06:34:53.175583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:40.590 [2024-12-06 06:34:53.283078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.590 [2024-12-06 06:34:53.283408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:40.590 [2024-12-06 06:34:53.283756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:40.590 Running I/O for 1 seconds...[2024-12-06 06:34:53.283872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.964 00:08:41.964 lcore 0: 196610 00:08:41.964 lcore 1: 196607 00:08:41.964 lcore 2: 196609 00:08:41.964 lcore 3: 196610 00:08:41.964 done. 
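The four lcore lines above appear to be per-reactor event counts for the one-second run, roughly 196k events on each of the four cores requested by -m 0xF. To compare reactor scaling outside the harness, the same binary can be rerun with different core masks; the loop below is ours, while the binary path and the -m/-t flags come straight from the trace.

    # Sketch: rerun event_perf with one and four reactors and compare counts.
    PERF=/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf
    for mask in 0x1 0xF; do
        echo "== core mask $mask, 1 second run =="
        "$PERF" -m "$mask" -t 1
    done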
00:08:41.964 00:08:41.964 real 0m1.470s 00:08:41.964 user 0m4.258s 00:08:41.964 sys 0m0.092s 00:08:41.964 06:34:54 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.964 ************************************ 00:08:41.964 END TEST event_perf 00:08:41.964 ************************************ 00:08:41.964 06:34:54 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:08:41.964 06:34:54 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:41.964 06:34:54 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:41.964 06:34:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.964 06:34:54 event -- common/autotest_common.sh@10 -- # set +x 00:08:41.964 ************************************ 00:08:41.964 START TEST event_reactor 00:08:41.964 ************************************ 00:08:41.964 06:34:54 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:41.964 [2024-12-06 06:34:54.550301] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:08:41.964 [2024-12-06 06:34:54.550407] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58336 ] 00:08:42.223 [2024-12-06 06:34:54.707987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.223 [2024-12-06 06:34:54.808328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.598 test_start 00:08:43.598 oneshot 00:08:43.598 tick 100 00:08:43.598 tick 100 00:08:43.598 tick 250 00:08:43.598 tick 100 00:08:43.598 tick 100 00:08:43.598 tick 100 00:08:43.598 tick 250 00:08:43.598 tick 500 00:08:43.598 tick 100 00:08:43.598 tick 100 00:08:43.598 tick 250 00:08:43.598 tick 100 00:08:43.598 tick 100 00:08:43.598 test_end 00:08:43.598 ************************************ 00:08:43.598 END TEST event_reactor 00:08:43.598 ************************************ 00:08:43.598 00:08:43.598 real 0m1.444s 00:08:43.598 user 0m1.276s 00:08:43.598 sys 0m0.059s 00:08:43.598 06:34:55 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:43.598 06:34:55 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:08:43.598 06:34:56 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:43.598 06:34:56 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:43.598 06:34:56 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:43.598 06:34:56 event -- common/autotest_common.sh@10 -- # set +x 00:08:43.598 ************************************ 00:08:43.598 START TEST event_reactor_perf 00:08:43.598 ************************************ 00:08:43.598 06:34:56 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:43.598 [2024-12-06 06:34:56.065637] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
00:08:43.598 [2024-12-06 06:34:56.065928] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58378 ] 00:08:43.598 [2024-12-06 06:34:56.226732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.598 [2024-12-06 06:34:56.329687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.980 test_start 00:08:44.980 test_end 00:08:44.980 Performance: 287213 events per second 00:08:44.980 00:08:44.980 real 0m1.465s 00:08:44.980 user 0m1.280s 00:08:44.980 sys 0m0.074s 00:08:44.981 06:34:57 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:44.981 ************************************ 00:08:44.981 END TEST event_reactor_perf 00:08:44.981 ************************************ 00:08:44.981 06:34:57 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:08:44.981 06:34:57 event -- event/event.sh@49 -- # uname -s 00:08:44.981 06:34:57 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:08:44.981 06:34:57 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:44.981 06:34:57 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:44.981 06:34:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:44.981 06:34:57 event -- common/autotest_common.sh@10 -- # set +x 00:08:44.981 ************************************ 00:08:44.981 START TEST event_scheduler 00:08:44.981 ************************************ 00:08:44.981 06:34:57 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:44.981 * Looking for test storage... 
00:08:44.981 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:08:44.981 06:34:57 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:44.981 06:34:57 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:44.981 06:34:57 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:08:44.981 06:34:57 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:44.981 06:34:57 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:44.981 06:34:57 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:44.981 06:34:57 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:44.981 06:34:57 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:08:44.981 06:34:57 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:08:44.981 06:34:57 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:08:44.981 06:34:57 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:08:44.981 06:34:57 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:08:44.981 06:34:57 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:08:44.981 06:34:57 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:08:44.981 06:34:57 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:44.981 06:34:57 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:08:44.981 06:34:57 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:08:44.981 06:34:57 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:44.981 06:34:57 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:44.981 06:34:57 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:08:44.981 06:34:57 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:08:44.981 06:34:57 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:44.981 06:34:57 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:08:44.981 06:34:57 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:08:44.981 06:34:57 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:08:44.981 06:34:57 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:08:44.981 06:34:57 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:44.981 06:34:57 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:08:44.981 06:34:57 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:08:44.981 06:34:57 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:44.981 06:34:57 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:44.981 06:34:57 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:08:44.981 06:34:57 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:44.981 06:34:57 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:44.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.981 --rc genhtml_branch_coverage=1 00:08:44.981 --rc genhtml_function_coverage=1 00:08:44.981 --rc genhtml_legend=1 00:08:44.981 --rc geninfo_all_blocks=1 00:08:44.981 --rc geninfo_unexecuted_blocks=1 00:08:44.981 00:08:44.981 ' 00:08:44.981 06:34:57 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:44.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.981 --rc genhtml_branch_coverage=1 00:08:44.981 --rc genhtml_function_coverage=1 00:08:44.981 --rc genhtml_legend=1 00:08:44.981 --rc geninfo_all_blocks=1 00:08:44.981 --rc geninfo_unexecuted_blocks=1 00:08:44.981 00:08:44.981 ' 00:08:44.981 06:34:57 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:44.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.981 --rc genhtml_branch_coverage=1 00:08:44.981 --rc genhtml_function_coverage=1 00:08:44.981 --rc genhtml_legend=1 00:08:44.981 --rc geninfo_all_blocks=1 00:08:44.981 --rc geninfo_unexecuted_blocks=1 00:08:44.981 00:08:44.981 ' 00:08:45.242 06:34:57 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:45.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.242 --rc genhtml_branch_coverage=1 00:08:45.242 --rc genhtml_function_coverage=1 00:08:45.242 --rc genhtml_legend=1 00:08:45.242 --rc geninfo_all_blocks=1 00:08:45.242 --rc geninfo_unexecuted_blocks=1 00:08:45.242 00:08:45.242 ' 00:08:45.242 06:34:57 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:45.242 06:34:57 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58448 00:08:45.242 06:34:57 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:45.242 06:34:57 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58448 00:08:45.242 06:34:57 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:08:45.242 06:34:57 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58448 ']' 00:08:45.242 06:34:57 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.242 06:34:57 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:45.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.242 06:34:57 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.242 06:34:57 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:45.242 06:34:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:45.242 [2024-12-06 06:34:57.782656] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:08:45.242 [2024-12-06 06:34:57.783104] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58448 ] 00:08:45.242 [2024-12-06 06:34:57.941811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:45.503 [2024-12-06 06:34:58.050325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.503 [2024-12-06 06:34:58.050952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.503 [2024-12-06 06:34:58.051307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:45.503 [2024-12-06 06:34:58.051454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:46.074 06:34:58 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:46.074 06:34:58 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:08:46.075 06:34:58 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:46.075 06:34:58 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.075 06:34:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:46.075 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:46.075 POWER: Cannot set governor of lcore 0 to userspace 00:08:46.075 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:46.075 POWER: Cannot set governor of lcore 0 to performance 00:08:46.075 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:46.075 POWER: Cannot set governor of lcore 0 to userspace 00:08:46.075 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:46.075 POWER: Cannot set governor of lcore 0 to userspace 00:08:46.075 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:08:46.075 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:08:46.075 POWER: Unable to set Power Management Environment for lcore 0 00:08:46.075 [2024-12-06 06:34:58.645058] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:08:46.075 [2024-12-06 06:34:58.645079] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:08:46.075 [2024-12-06 06:34:58.645089] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:08:46.075 [2024-12-06 06:34:58.645106] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:08:46.075 [2024-12-06 06:34:58.645114] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:08:46.075 [2024-12-06 06:34:58.645123] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:08:46.075 06:34:58 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.075 06:34:58 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:46.075 06:34:58 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.075 06:34:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:46.337 [2024-12-06 06:34:58.883000] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:08:46.337 06:34:58 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.337 06:34:58 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:46.337 06:34:58 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:46.337 06:34:58 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.337 06:34:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:46.337 ************************************ 00:08:46.337 START TEST scheduler_create_thread 00:08:46.337 ************************************ 00:08:46.337 06:34:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:08:46.337 06:34:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:46.337 06:34:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.337 06:34:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:46.337 2 00:08:46.337 06:34:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.337 06:34:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:46.337 06:34:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.337 06:34:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:46.337 3 00:08:46.337 06:34:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.337 06:34:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:46.337 06:34:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.337 06:34:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:46.337 4 00:08:46.337 06:34:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.337 06:34:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:46.337 06:34:58 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.337 06:34:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:46.337 5 00:08:46.337 06:34:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.337 06:34:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:46.337 06:34:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.337 06:34:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:46.337 6 00:08:46.337 06:34:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.337 06:34:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:08:46.337 06:34:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.337 06:34:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:46.337 7 00:08:46.337 06:34:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.337 06:34:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:46.337 06:34:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.337 06:34:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:46.337 8 00:08:46.337 06:34:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.337 06:34:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:46.337 06:34:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.337 06:34:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:46.337 9 00:08:46.337 06:34:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.337 06:34:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:46.337 06:34:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.337 06:34:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:46.337 10 00:08:46.337 06:34:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.337 06:34:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:08:46.337 06:34:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.337 06:34:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:46.337 06:34:58 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.337 06:34:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:46.337 06:34:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:46.337 06:34:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.337 06:34:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:46.337 06:34:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.338 06:34:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:08:46.338 06:34:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.338 06:34:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:46.338 06:34:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.338 06:34:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:46.338 06:34:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:46.338 06:34:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.338 06:34:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:47.343 ************************************ 00:08:47.343 END TEST scheduler_create_thread 00:08:47.343 ************************************ 00:08:47.343 06:35:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.343 00:08:47.343 real 0m1.176s 00:08:47.343 user 0m0.015s 00:08:47.343 sys 0m0.005s 00:08:47.343 06:35:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.343 06:35:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:47.604 06:35:00 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:47.604 06:35:00 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58448 00:08:47.604 06:35:00 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58448 ']' 00:08:47.604 06:35:00 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58448 00:08:47.604 06:35:00 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:08:47.604 06:35:00 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:47.604 06:35:00 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58448 00:08:47.604 06:35:00 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:08:47.604 06:35:00 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:08:47.604 killing process with pid 58448 00:08:47.604 06:35:00 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58448' 00:08:47.604 06:35:00 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58448 00:08:47.604 06:35:00 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 58448 00:08:47.866 [2024-12-06 06:35:00.556696] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:08:48.807 ************************************ 00:08:48.807 END TEST event_scheduler 00:08:48.807 ************************************ 00:08:48.807 00:08:48.807 real 0m3.752s 00:08:48.807 user 0m6.147s 00:08:48.807 sys 0m0.361s 00:08:48.807 06:35:01 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:48.807 06:35:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:48.807 06:35:01 event -- event/event.sh@51 -- # modprobe -n nbd 00:08:48.807 06:35:01 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:48.807 06:35:01 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:48.807 06:35:01 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:48.807 06:35:01 event -- common/autotest_common.sh@10 -- # set +x 00:08:48.807 ************************************ 00:08:48.807 START TEST app_repeat 00:08:48.807 ************************************ 00:08:48.807 06:35:01 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:08:48.807 06:35:01 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:48.807 06:35:01 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:48.807 06:35:01 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:08:48.807 06:35:01 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:48.807 06:35:01 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:08:48.807 06:35:01 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:08:48.807 06:35:01 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:08:48.807 06:35:01 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58538 00:08:48.807 06:35:01 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:08:48.807 Process app_repeat pid: 58538 00:08:48.807 06:35:01 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58538' 00:08:48.807 06:35:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:48.807 spdk_app_start Round 0 00:08:48.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:48.807 06:35:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:48.807 06:35:01 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:48.807 06:35:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58538 /var/tmp/spdk-nbd.sock 00:08:48.807 06:35:01 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58538 ']' 00:08:48.807 06:35:01 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:48.807 06:35:01 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:48.807 06:35:01 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:48.807 06:35:01 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:48.807 06:35:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:48.807 [2024-12-06 06:35:01.443820] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
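The event_scheduler test that finished above exercises the scheduler plugin purely over RPC; rpc_cmd in the trace is autotest_common's thin wrapper around scripts/rpc.py. A minimal sketch of the traced sequence, assuming an app already listening on its default RPC socket (the $rpc shorthand is ours; the RPC names and the -n/-m/-a flags are exactly as traced):

  rpc="scripts/rpc.py --plugin scheduler_plugin"
  $rpc scheduler_thread_create -n idle_pinned -m 0x1 -a 0   # pinned thread, 0% active (one per core above)
  tid=$($rpc scheduler_thread_create -n half_active -a 0)   # the RPC prints the new thread id (11 above)
  $rpc scheduler_thread_set_active "$tid" 50                # raise its busy cycle to 50%
  tid=$($rpc scheduler_thread_create -n deleted -a 100)     # id 12 above
  $rpc scheduler_thread_delete "$tid"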
00:08:48.807 [2024-12-06 06:35:01.444106] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58538 ] 00:08:49.097 [2024-12-06 06:35:01.607207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:49.097 [2024-12-06 06:35:01.743594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.097 [2024-12-06 06:35:01.743624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.670 06:35:02 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:49.670 06:35:02 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:49.670 06:35:02 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:49.930 Malloc0 00:08:49.930 06:35:02 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:50.193 Malloc1 00:08:50.193 06:35:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:50.193 06:35:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:50.193 06:35:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:50.193 06:35:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:50.193 06:35:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:50.193 06:35:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:50.193 06:35:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:50.193 06:35:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:50.193 06:35:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:50.193 06:35:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:50.193 06:35:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:50.193 06:35:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:50.193 06:35:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:50.193 06:35:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:50.193 06:35:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:50.193 06:35:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:50.455 /dev/nbd0 00:08:50.455 06:35:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:50.455 06:35:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:50.455 06:35:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:50.455 06:35:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:50.455 06:35:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:50.455 06:35:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:50.455 06:35:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:50.455 06:35:03 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:08:50.455 06:35:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:50.455 06:35:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:50.455 06:35:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:50.455 1+0 records in 00:08:50.455 1+0 records out 00:08:50.455 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000909673 s, 4.5 MB/s 00:08:50.455 06:35:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:50.455 06:35:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:50.455 06:35:03 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:50.455 06:35:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:50.455 06:35:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:50.455 06:35:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:50.455 06:35:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:50.455 06:35:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:50.777 /dev/nbd1 00:08:50.777 06:35:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:50.777 06:35:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:50.777 06:35:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:50.777 06:35:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:50.777 06:35:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:50.777 06:35:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:50.777 06:35:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:50.777 06:35:03 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:50.777 06:35:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:50.777 06:35:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:50.777 06:35:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:50.777 1+0 records in 00:08:50.777 1+0 records out 00:08:50.777 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000635766 s, 6.4 MB/s 00:08:50.777 06:35:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:50.777 06:35:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:50.777 06:35:03 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:50.777 06:35:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:50.777 06:35:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:50.777 06:35:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:50.777 06:35:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:50.777 06:35:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:50.777 06:35:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:50.777 
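Both waitfornbd calls traced above (nbd0, then nbd1) use the same readiness pattern: poll /proc/partitions until the kernel exposes the device, then prove it answers I/O with a single direct read. A sketch reconstructed from the traced steps; the 20-try bound and the grep/dd/stat checks are as logged, while the poll delay and scratch path are assumptions:

  waitfornbd() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1                                # assumed pacing between polls
      done
      # one 4 KiB direct read proves the device actually serves data
      dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
      local size
      size=$(stat -c %s /tmp/nbdtest)
      rm -f /tmp/nbdtest
      [ "$size" != 0 ]                             # non-empty readback: device is live
  }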
06:35:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:51.040 06:35:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:51.040 { 00:08:51.040 "nbd_device": "/dev/nbd0", 00:08:51.040 "bdev_name": "Malloc0" 00:08:51.040 }, 00:08:51.040 { 00:08:51.040 "nbd_device": "/dev/nbd1", 00:08:51.040 "bdev_name": "Malloc1" 00:08:51.040 } 00:08:51.040 ]' 00:08:51.040 06:35:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:51.040 { 00:08:51.040 "nbd_device": "/dev/nbd0", 00:08:51.040 "bdev_name": "Malloc0" 00:08:51.040 }, 00:08:51.040 { 00:08:51.040 "nbd_device": "/dev/nbd1", 00:08:51.040 "bdev_name": "Malloc1" 00:08:51.040 } 00:08:51.040 ]' 00:08:51.040 06:35:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:51.040 06:35:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:51.040 /dev/nbd1' 00:08:51.040 06:35:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:51.040 06:35:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:51.040 /dev/nbd1' 00:08:51.040 06:35:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:51.040 06:35:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:51.040 06:35:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:51.040 06:35:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:51.040 06:35:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:51.040 06:35:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:51.040 06:35:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:51.040 06:35:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:51.040 06:35:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:51.040 06:35:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:51.040 06:35:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:51.040 256+0 records in 00:08:51.040 256+0 records out 00:08:51.040 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100234 s, 105 MB/s 00:08:51.040 06:35:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:51.040 06:35:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:51.040 256+0 records in 00:08:51.040 256+0 records out 00:08:51.040 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0194595 s, 53.9 MB/s 00:08:51.040 06:35:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:51.040 06:35:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:51.302 256+0 records in 00:08:51.302 256+0 records out 00:08:51.302 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0631511 s, 16.6 MB/s 00:08:51.302 06:35:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:51.303 06:35:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:51.303 06:35:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:51.303 06:35:03 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:51.303 06:35:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:51.303 06:35:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:51.303 06:35:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:51.303 06:35:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:51.303 06:35:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:51.303 06:35:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:51.303 06:35:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:51.303 06:35:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:51.303 06:35:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:51.303 06:35:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:51.303 06:35:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:51.303 06:35:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:51.303 06:35:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:51.303 06:35:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:51.303 06:35:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:51.564 06:35:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:51.564 06:35:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:51.564 06:35:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:51.564 06:35:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:51.564 06:35:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:51.564 06:35:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:51.564 06:35:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:51.564 06:35:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:51.564 06:35:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:51.564 06:35:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:51.564 06:35:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:51.564 06:35:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:51.564 06:35:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:51.564 06:35:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:51.564 06:35:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:51.564 06:35:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:51.564 06:35:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:51.564 06:35:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:51.564 06:35:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:51.564 06:35:04 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:51.564 06:35:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:51.826 06:35:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:51.826 06:35:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:51.826 06:35:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:51.826 06:35:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:51.826 06:35:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:51.826 06:35:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:51.826 06:35:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:51.826 06:35:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:51.826 06:35:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:51.826 06:35:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:51.826 06:35:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:51.826 06:35:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:51.826 06:35:04 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:52.399 06:35:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:52.970 [2024-12-06 06:35:05.655710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:53.231 [2024-12-06 06:35:05.780759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:53.231 [2024-12-06 06:35:05.780911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.231 [2024-12-06 06:35:05.928108] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:53.231 [2024-12-06 06:35:05.929563] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:55.170 spdk_app_start Round 1 00:08:55.170 06:35:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:55.170 06:35:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:55.170 06:35:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58538 /var/tmp/spdk-nbd.sock 00:08:55.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:55.170 06:35:07 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58538 ']' 00:08:55.170 06:35:07 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:55.170 06:35:07 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:55.170 06:35:07 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
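Round 1, announced just above, repeats exactly the setup Round 0 ran: two 64 MB malloc bdevs created over the nbd RPC socket, each exported as a kernel block device. Condensed from the traced rpc.py calls (sizes, names and socket path as logged):

  sock=/var/tmp/spdk-nbd.sock
  scripts/rpc.py -s "$sock" bdev_malloc_create 64 4096         # 64 MB, 4096-byte blocks; prints "Malloc0"
  scripts/rpc.py -s "$sock" bdev_malloc_create 64 4096         # prints "Malloc1"
  scripts/rpc.py -s "$sock" nbd_start_disk Malloc0 /dev/nbd0   # then waitfornbd, sketched earlier
  scripts/rpc.py -s "$sock" nbd_start_disk Malloc1 /dev/nbd1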
00:08:55.170 06:35:07 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:55.170 06:35:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:55.431 06:35:08 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:55.431 06:35:08 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:55.431 06:35:08 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:55.693 Malloc0 00:08:55.693 06:35:08 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:55.954 Malloc1 00:08:55.954 06:35:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:55.954 06:35:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:55.954 06:35:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:55.954 06:35:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:55.954 06:35:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:55.954 06:35:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:55.954 06:35:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:55.954 06:35:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:55.954 06:35:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:55.954 06:35:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:55.954 06:35:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:55.954 06:35:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:55.954 06:35:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:55.954 06:35:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:55.954 06:35:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:55.954 06:35:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:56.523 /dev/nbd0 00:08:56.523 06:35:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:56.523 06:35:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:56.523 06:35:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:56.523 06:35:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:56.523 06:35:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:56.523 06:35:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:56.523 06:35:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:56.523 06:35:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:56.523 06:35:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:56.523 06:35:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:56.523 06:35:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:56.523 1+0 records in 00:08:56.523 1+0 records out 
00:08:56.523 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000810324 s, 5.1 MB/s 00:08:56.523 06:35:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:56.523 06:35:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:56.523 06:35:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:56.523 06:35:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:56.523 06:35:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:56.523 06:35:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:56.523 06:35:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:56.523 06:35:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:56.523 /dev/nbd1 00:08:56.783 06:35:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:56.783 06:35:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:56.783 06:35:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:56.783 06:35:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:56.783 06:35:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:56.783 06:35:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:56.783 06:35:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:56.783 06:35:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:56.783 06:35:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:56.783 06:35:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:56.783 06:35:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:56.783 1+0 records in 00:08:56.783 1+0 records out 00:08:56.783 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00051746 s, 7.9 MB/s 00:08:56.783 06:35:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:56.783 06:35:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:56.783 06:35:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:56.783 06:35:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:56.783 06:35:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:56.783 06:35:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:56.783 06:35:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:56.783 06:35:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:56.783 06:35:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:56.783 06:35:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:56.783 06:35:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:56.783 { 00:08:56.783 "nbd_device": "/dev/nbd0", 00:08:56.783 "bdev_name": "Malloc0" 00:08:56.783 }, 00:08:56.783 { 00:08:56.783 "nbd_device": "/dev/nbd1", 00:08:56.783 "bdev_name": "Malloc1" 00:08:56.783 } 00:08:56.783 
]' 00:08:56.783 06:35:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:56.783 { 00:08:56.783 "nbd_device": "/dev/nbd0", 00:08:56.783 "bdev_name": "Malloc0" 00:08:56.783 }, 00:08:56.783 { 00:08:56.783 "nbd_device": "/dev/nbd1", 00:08:56.783 "bdev_name": "Malloc1" 00:08:56.783 } 00:08:56.783 ]' 00:08:56.783 06:35:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:57.045 06:35:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:57.045 /dev/nbd1' 00:08:57.045 06:35:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:57.045 /dev/nbd1' 00:08:57.045 06:35:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:57.045 06:35:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:57.045 06:35:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:57.045 06:35:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:57.045 06:35:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:57.045 06:35:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:57.045 06:35:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:57.045 06:35:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:57.045 06:35:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:57.045 06:35:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:57.045 06:35:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:57.045 06:35:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:57.045 256+0 records in 00:08:57.045 256+0 records out 00:08:57.045 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00751384 s, 140 MB/s 00:08:57.045 06:35:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:57.045 06:35:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:57.045 256+0 records in 00:08:57.045 256+0 records out 00:08:57.045 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0225432 s, 46.5 MB/s 00:08:57.045 06:35:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:57.045 06:35:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:57.045 256+0 records in 00:08:57.045 256+0 records out 00:08:57.045 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212002 s, 49.5 MB/s 00:08:57.045 06:35:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:57.045 06:35:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:57.045 06:35:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:57.045 06:35:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:57.045 06:35:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:57.045 06:35:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:57.045 06:35:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:57.045 06:35:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i 
in "${nbd_list[@]}" 00:08:57.045 06:35:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:57.045 06:35:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:57.045 06:35:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:57.045 06:35:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:57.045 06:35:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:57.045 06:35:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:57.045 06:35:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:57.045 06:35:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:57.045 06:35:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:57.045 06:35:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:57.045 06:35:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:57.306 06:35:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:57.306 06:35:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:57.306 06:35:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:57.306 06:35:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:57.306 06:35:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:57.306 06:35:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:57.306 06:35:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:57.306 06:35:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:57.306 06:35:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:57.306 06:35:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:57.565 06:35:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:57.565 06:35:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:57.565 06:35:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:57.565 06:35:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:57.565 06:35:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:57.565 06:35:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:57.565 06:35:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:57.565 06:35:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:57.565 06:35:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:57.565 06:35:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:57.565 06:35:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:57.826 06:35:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:57.826 06:35:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:57.826 06:35:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 
00:08:57.826 06:35:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:57.826 06:35:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:57.826 06:35:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:57.826 06:35:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:57.826 06:35:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:57.826 06:35:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:57.826 06:35:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:57.826 06:35:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:57.826 06:35:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:57.826 06:35:10 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:58.086 06:35:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:59.027 [2024-12-06 06:35:11.519736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:59.027 [2024-12-06 06:35:11.649421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.027 [2024-12-06 06:35:11.649571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.287 [2024-12-06 06:35:11.797886] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:59.287 [2024-12-06 06:35:11.797986] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:01.269 spdk_app_start Round 2 00:09:01.269 06:35:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:01.269 06:35:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:01.269 06:35:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58538 /var/tmp/spdk-nbd.sock 00:09:01.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:01.269 06:35:13 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58538 ']' 00:09:01.269 06:35:13 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:01.269 06:35:13 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:01.269 06:35:13 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
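The device-count check that closed Round 1's teardown above (nbd_disks_json='[]' through count=0) derives its count from the nbd_get_disks JSON; the same helper reported count=2 while both disks were up. Its shape, as traced, with '|| true' matching the traced 'true' (grep -c exits non-zero on zero matches while still printing 0):

  count=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
          | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
  [ "$count" -eq 0 ]   # the traced '[' 0 -ne 0 ']' guard: any leftover device fails the round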
00:09:01.269 06:35:13 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:01.269 06:35:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:01.269 06:35:13 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:01.269 06:35:13 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:01.269 06:35:13 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:01.529 Malloc0 00:09:01.529 06:35:14 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:01.788 Malloc1 00:09:01.788 06:35:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:01.788 06:35:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:01.788 06:35:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:01.788 06:35:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:01.788 06:35:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:01.788 06:35:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:01.788 06:35:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:01.788 06:35:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:01.788 06:35:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:01.788 06:35:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:01.788 06:35:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:01.788 06:35:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:01.788 06:35:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:01.788 06:35:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:01.788 06:35:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:01.788 06:35:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:02.049 /dev/nbd0 00:09:02.049 06:35:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:02.049 06:35:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:02.049 06:35:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:02.049 06:35:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:02.049 06:35:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:02.049 06:35:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:02.049 06:35:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:02.049 06:35:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:02.049 06:35:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:02.049 06:35:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:02.049 06:35:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:02.049 1+0 records in 00:09:02.049 1+0 records out 
00:09:02.049 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000402521 s, 10.2 MB/s 00:09:02.049 06:35:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:02.049 06:35:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:02.049 06:35:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:02.049 06:35:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:02.049 06:35:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:02.049 06:35:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:02.049 06:35:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:02.049 06:35:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:02.310 /dev/nbd1 00:09:02.310 06:35:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:02.310 06:35:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:02.310 06:35:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:02.310 06:35:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:02.310 06:35:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:02.310 06:35:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:02.310 06:35:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:02.310 06:35:15 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:02.310 06:35:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:02.310 06:35:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:02.310 06:35:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:02.310 1+0 records in 00:09:02.310 1+0 records out 00:09:02.310 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283436 s, 14.5 MB/s 00:09:02.310 06:35:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:02.310 06:35:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:02.310 06:35:15 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:02.310 06:35:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:02.310 06:35:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:02.310 06:35:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:02.310 06:35:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:02.310 06:35:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:02.310 06:35:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:02.310 06:35:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:02.650 06:35:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:02.650 { 00:09:02.650 "nbd_device": "/dev/nbd0", 00:09:02.650 "bdev_name": "Malloc0" 00:09:02.650 }, 00:09:02.650 { 00:09:02.650 "nbd_device": "/dev/nbd1", 00:09:02.650 "bdev_name": "Malloc1" 00:09:02.650 } 
00:09:02.650 ]' 00:09:02.650 06:35:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:02.650 06:35:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:02.650 { 00:09:02.650 "nbd_device": "/dev/nbd0", 00:09:02.650 "bdev_name": "Malloc0" 00:09:02.650 }, 00:09:02.650 { 00:09:02.650 "nbd_device": "/dev/nbd1", 00:09:02.650 "bdev_name": "Malloc1" 00:09:02.650 } 00:09:02.650 ]' 00:09:02.650 06:35:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:02.650 /dev/nbd1' 00:09:02.650 06:35:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:02.650 06:35:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:02.650 /dev/nbd1' 00:09:02.650 06:35:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:02.650 06:35:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:02.650 06:35:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:02.650 06:35:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:02.650 06:35:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:02.650 06:35:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:02.650 06:35:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:02.650 06:35:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:02.650 06:35:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:02.650 06:35:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:02.651 06:35:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:02.651 256+0 records in 00:09:02.651 256+0 records out 00:09:02.651 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00661423 s, 159 MB/s 00:09:02.651 06:35:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:02.651 06:35:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:02.651 256+0 records in 00:09:02.651 256+0 records out 00:09:02.651 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0217861 s, 48.1 MB/s 00:09:02.651 06:35:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:02.651 06:35:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:02.651 256+0 records in 00:09:02.651 256+0 records out 00:09:02.651 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0327827 s, 32.0 MB/s 00:09:02.651 06:35:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:02.651 06:35:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:02.651 06:35:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:02.651 06:35:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:02.651 06:35:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:02.651 06:35:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:02.651 06:35:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:02.651 06:35:15 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:02.651 06:35:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:02.911 06:35:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:02.911 06:35:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:02.911 06:35:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:02.911 06:35:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:02.911 06:35:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:02.911 06:35:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:02.911 06:35:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:02.911 06:35:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:02.911 06:35:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:02.911 06:35:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:02.911 06:35:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:02.911 06:35:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:02.911 06:35:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:02.911 06:35:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:02.911 06:35:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:02.911 06:35:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:02.911 06:35:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:02.911 06:35:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:02.911 06:35:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:02.911 06:35:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:03.172 06:35:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:03.172 06:35:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:03.172 06:35:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:03.172 06:35:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:03.172 06:35:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:03.172 06:35:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:03.172 06:35:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:03.172 06:35:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:03.172 06:35:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:03.172 06:35:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:03.172 06:35:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:03.433 06:35:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:03.433 06:35:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:03.433 06:35:16 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:09:03.433 06:35:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:03.433 06:35:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:03.433 06:35:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:03.433 06:35:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:03.433 06:35:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:03.433 06:35:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:03.433 06:35:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:03.433 06:35:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:03.433 06:35:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:03.433 06:35:16 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:04.006 06:35:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:04.579 [2024-12-06 06:35:17.263699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:04.840 [2024-12-06 06:35:17.368072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:04.840 [2024-12-06 06:35:17.368263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.840 [2024-12-06 06:35:17.495993] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:04.840 [2024-12-06 06:35:17.496053] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:06.756 06:35:19 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58538 /var/tmp/spdk-nbd.sock 00:09:06.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:06.756 06:35:19 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58538 ']' 00:09:06.756 06:35:19 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:06.756 06:35:19 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:06.756 06:35:19 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
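All three rounds above come from a single loop in event.sh; after it the app restarts one last time and the harness, at the event.sh@38-39 steps traced here, waits for the socket and then kills the process outright instead of cycling again. A skeleton inferred from the echoed Round markers and the traced spdk_kill_instance/sleep pairs:

  for i in {0..2}; do
      echo "spdk_app_start Round $i"
      waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
      # ... bdev_malloc_create + nbd start/verify/stop, as sketched earlier ...
      scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
      sleep 3                            # let the app come back up before the next pass
  done
  waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
  killprocess "$repeat_pid"

killprocess itself is traced right after this; a sketch of its visible checks (the real helper also branches on uname and handles sudo-wrapped pids differently):

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 0                   # already gone
      local name
      name=$(ps --no-headers -o comm= "$pid")      # reactor_0 for this app
      [ "$name" != sudo ] || return 1              # the traced '[' name = sudo ']' branch, simplified
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid"
  }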
00:09:06.756 06:35:19 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:06.756 06:35:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:07.016 06:35:19 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:07.016 06:35:19 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:07.016 06:35:19 event.app_repeat -- event/event.sh@39 -- # killprocess 58538 00:09:07.016 06:35:19 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58538 ']' 00:09:07.016 06:35:19 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58538 00:09:07.016 06:35:19 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:09:07.016 06:35:19 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:07.016 06:35:19 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58538 00:09:07.277 killing process with pid 58538 00:09:07.277 06:35:19 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:07.277 06:35:19 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:07.277 06:35:19 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58538' 00:09:07.277 06:35:19 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58538 00:09:07.277 06:35:19 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58538 00:09:07.848 spdk_app_start is called in Round 0. 00:09:07.848 Shutdown signal received, stop current app iteration 00:09:07.848 Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 reinitialization... 00:09:07.848 spdk_app_start is called in Round 1. 00:09:07.848 Shutdown signal received, stop current app iteration 00:09:07.848 Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 reinitialization... 00:09:07.848 spdk_app_start is called in Round 2. 00:09:07.848 Shutdown signal received, stop current app iteration 00:09:07.848 Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 reinitialization... 00:09:07.848 spdk_app_start is called in Round 3. 00:09:07.848 Shutdown signal received, stop current app iteration 00:09:07.848 06:35:20 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:07.848 06:35:20 event.app_repeat -- event/event.sh@42 -- # return 0 00:09:07.848 00:09:07.848 real 0m19.123s 00:09:07.848 user 0m41.507s 00:09:07.848 sys 0m2.638s 00:09:07.848 06:35:20 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:07.848 06:35:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:07.848 ************************************ 00:09:07.848 END TEST app_repeat 00:09:07.848 ************************************ 00:09:07.848 06:35:20 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:07.848 06:35:20 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:07.848 06:35:20 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:07.848 06:35:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:07.848 06:35:20 event -- common/autotest_common.sh@10 -- # set +x 00:09:08.109 ************************************ 00:09:08.109 START TEST cpu_locks 00:09:08.109 ************************************ 00:09:08.109 06:35:20 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:08.109 * Looking for test storage... 
00:09:08.109 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:08.109 06:35:20 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:08.109 06:35:20 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:09:08.109 06:35:20 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:08.109 06:35:20 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:08.109 06:35:20 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:08.109 06:35:20 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:08.109 06:35:20 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:08.109 06:35:20 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:09:08.109 06:35:20 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:09:08.109 06:35:20 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:09:08.109 06:35:20 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:09:08.109 06:35:20 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:09:08.109 06:35:20 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:09:08.109 06:35:20 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:09:08.109 06:35:20 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:08.109 06:35:20 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:09:08.109 06:35:20 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:09:08.109 06:35:20 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:08.109 06:35:20 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:08.109 06:35:20 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:09:08.109 06:35:20 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:09:08.109 06:35:20 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:08.109 06:35:20 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:09:08.109 06:35:20 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:09:08.109 06:35:20 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:09:08.109 06:35:20 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:09:08.109 06:35:20 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:08.109 06:35:20 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:09:08.109 06:35:20 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:09:08.109 06:35:20 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:08.109 06:35:20 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:08.109 06:35:20 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:09:08.109 06:35:20 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:08.109 06:35:20 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:08.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.109 --rc genhtml_branch_coverage=1 00:09:08.109 --rc genhtml_function_coverage=1 00:09:08.109 --rc genhtml_legend=1 00:09:08.109 --rc geninfo_all_blocks=1 00:09:08.109 --rc geninfo_unexecuted_blocks=1 00:09:08.109 00:09:08.109 ' 00:09:08.109 06:35:20 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:08.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.109 --rc genhtml_branch_coverage=1 00:09:08.109 --rc genhtml_function_coverage=1 
00:09:08.109 --rc genhtml_legend=1 00:09:08.109 --rc geninfo_all_blocks=1 00:09:08.109 --rc geninfo_unexecuted_blocks=1 00:09:08.109 00:09:08.109 ' 00:09:08.109 06:35:20 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:08.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.109 --rc genhtml_branch_coverage=1 00:09:08.109 --rc genhtml_function_coverage=1 00:09:08.109 --rc genhtml_legend=1 00:09:08.109 --rc geninfo_all_blocks=1 00:09:08.109 --rc geninfo_unexecuted_blocks=1 00:09:08.109 00:09:08.109 ' 00:09:08.109 06:35:20 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:08.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.109 --rc genhtml_branch_coverage=1 00:09:08.109 --rc genhtml_function_coverage=1 00:09:08.109 --rc genhtml_legend=1 00:09:08.109 --rc geninfo_all_blocks=1 00:09:08.109 --rc geninfo_unexecuted_blocks=1 00:09:08.109 00:09:08.109 ' 00:09:08.109 06:35:20 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:08.109 06:35:20 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:08.109 06:35:20 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:08.109 06:35:20 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:08.109 06:35:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:08.109 06:35:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.109 06:35:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:08.109 ************************************ 00:09:08.109 START TEST default_locks 00:09:08.109 ************************************ 00:09:08.109 06:35:20 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:09:08.109 06:35:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58985 00:09:08.109 06:35:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58985 00:09:08.109 06:35:20 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58985 ']' 00:09:08.109 06:35:20 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.109 06:35:20 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:08.109 06:35:20 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.109 06:35:20 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:08.109 06:35:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:08.109 06:35:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:08.369 [2024-12-06 06:35:20.854428] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
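The cpu_locks prologue above decides which lcov flags apply by comparing version strings: lt 1.15 2 calls cmp_versions 1.15 '<' 2, which splits both strings on ., - and : and compares them numerically field by field, treating missing fields as 0. A reduced sketch under the assumption that fields are numeric and only the operators used here ('<', '>', '==') matter; the real scripts/common.sh handles more cases:

  # Sketch: field-wise version comparison, reduced from the traced logic.
  lt() { cmp_versions "$1" '<' "$2"; }

  cmp_versions() {
      local ver1_str=$1 op=$2 ver2_str=$3
      local IFS=.-:                      # split fields on dots, dashes, colons
      local -a ver1 ver2
      read -ra ver1 <<< "$ver1_str"
      read -ra ver2 <<< "$ver2_str"
      local v len=$((${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}))
      for ((v = 0; v < len; v++)); do
          local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields compare as 0
          ((a > b)) && { [[ $op == '>' ]]; return; }
          ((a < b)) && { [[ $op == '<' ]]; return; }
      done
      [[ $op == '==' ]]                  # every field equal
  }

  lt 1.15 2 && echo "1.15 < 2"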
00:09:08.369 [2024-12-06 06:35:20.854610] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58985 ] 00:09:08.369 [2024-12-06 06:35:21.019518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.631 [2024-12-06 06:35:21.155383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.203 06:35:21 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:09.203 06:35:21 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:09:09.203 06:35:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58985 00:09:09.203 06:35:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58985 00:09:09.203 06:35:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:09.463 06:35:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58985 00:09:09.463 06:35:22 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58985 ']' 00:09:09.463 06:35:22 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58985 00:09:09.463 06:35:22 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:09:09.463 06:35:22 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:09.463 06:35:22 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58985 00:09:09.463 06:35:22 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:09.463 06:35:22 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:09.463 killing process with pid 58985 00:09:09.463 06:35:22 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58985' 00:09:09.463 06:35:22 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58985 00:09:09.463 06:35:22 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58985 00:09:11.368 06:35:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58985 00:09:11.368 06:35:23 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:09:11.368 06:35:23 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58985 00:09:11.368 06:35:23 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:11.368 06:35:23 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:11.368 06:35:23 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:11.368 06:35:23 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:11.368 06:35:23 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58985 00:09:11.368 06:35:23 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58985 ']' 00:09:11.368 06:35:23 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.368 06:35:23 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:11.368 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:11.368 06:35:23 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.368 06:35:23 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:11.368 06:35:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:11.368 ERROR: process (pid: 58985) is no longer running 00:09:11.368 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58985) - No such process 00:09:11.368 06:35:23 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:11.368 06:35:23 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:09:11.368 06:35:23 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:09:11.368 06:35:23 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:11.368 06:35:23 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:11.368 06:35:23 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:11.368 06:35:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:09:11.368 06:35:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:11.368 ************************************ 00:09:11.368 END TEST default_locks 00:09:11.368 ************************************ 00:09:11.368 06:35:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:09:11.368 06:35:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:11.368 00:09:11.368 real 0m3.073s 00:09:11.368 user 0m3.027s 00:09:11.368 sys 0m0.625s 00:09:11.368 06:35:23 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:11.368 06:35:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:11.368 06:35:23 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:11.368 06:35:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:11.368 06:35:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:11.368 06:35:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:11.368 ************************************ 00:09:11.368 START TEST default_locks_via_rpc 00:09:11.368 ************************************ 00:09:11.368 06:35:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:09:11.368 06:35:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59049 00:09:11.368 06:35:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59049 00:09:11.368 06:35:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59049 ']' 00:09:11.368 06:35:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:11.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
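default_locks, finished above, pins down the mechanism the rest of this suite revolves around: spdk_tgt started with -m 0x1 takes a per-core file lock, and the test asserts the lock with lslocks before the kill and asserts its absence afterwards. A small sketch of that check; the spdk_cpu_lock name comes straight from the grep in the trace:

  # Sketch: assert that a pid holds SPDK's per-core file locks.
  locks_exist() {
      local pid=$1
      # lslocks lists the locks a process holds; SPDK's live under
      # /var/tmp/spdk_cpu_lock_*, so matching the name is sufficient.
      lslocks -p "$pid" | grep -q spdk_cpu_lock
  }

  locks_exist "$spdk_tgt_pid" && echo "core locks held by $spdk_tgt_pid"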
00:09:11.368 06:35:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.368 06:35:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:11.368 06:35:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.368 06:35:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:11.368 06:35:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.368 [2024-12-06 06:35:23.964154] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:09:11.368 [2024-12-06 06:35:23.964279] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59049 ] 00:09:11.626 [2024-12-06 06:35:24.125318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.626 [2024-12-06 06:35:24.228194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.196 06:35:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:12.196 06:35:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:12.196 06:35:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:09:12.196 06:35:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.196 06:35:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:12.196 06:35:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.196 06:35:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:09:12.196 06:35:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:12.196 06:35:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:09:12.196 06:35:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:12.196 06:35:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:09:12.196 06:35:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.196 06:35:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:12.196 06:35:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.196 06:35:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59049 00:09:12.196 06:35:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59049 00:09:12.196 06:35:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:12.473 06:35:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59049 00:09:12.473 06:35:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59049 ']' 00:09:12.473 06:35:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59049 00:09:12.473 06:35:25 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:09:12.473 06:35:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:12.473 06:35:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59049 00:09:12.473 killing process with pid 59049 00:09:12.473 06:35:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:12.473 06:35:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:12.473 06:35:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59049' 00:09:12.473 06:35:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59049 00:09:12.473 06:35:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59049 00:09:13.904 00:09:13.905 real 0m2.697s 00:09:13.905 user 0m2.710s 00:09:13.905 sys 0m0.427s 00:09:13.905 06:35:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:13.905 06:35:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.905 ************************************ 00:09:13.905 END TEST default_locks_via_rpc 00:09:13.905 ************************************ 00:09:13.905 06:35:26 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:13.905 06:35:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:13.905 06:35:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:13.905 06:35:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:14.162 ************************************ 00:09:14.162 START TEST non_locking_app_on_locked_coremask 00:09:14.162 ************************************ 00:09:14.162 06:35:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:09:14.162 06:35:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59107 00:09:14.162 06:35:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59107 /var/tmp/spdk.sock 00:09:14.162 06:35:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59107 ']' 00:09:14.162 06:35:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.162 06:35:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:14.162 06:35:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
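default_locks_via_rpc, completed above, drives the same locks over JSON-RPC instead of process flags: framework_disable_cpumask_locks releases a running app's core locks and framework_enable_cpumask_locks re-claims them. A sketch using scripts/rpc.py against its default /var/tmp/spdk.sock socket:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  "$rpc" framework_disable_cpumask_locks      # release the running app's locks
  lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock && echo "lock unexpectedly held" >&2
  "$rpc" framework_enable_cpumask_locks       # re-claim them
  lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock || echo "lock not re-taken" >&2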
00:09:14.162 06:35:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:14.162 06:35:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:14.162 06:35:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:14.162 [2024-12-06 06:35:26.726399] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:09:14.162 [2024-12-06 06:35:26.726545] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59107 ] 00:09:14.162 [2024-12-06 06:35:26.885823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.420 [2024-12-06 06:35:26.989617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.984 06:35:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:14.984 06:35:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:14.984 06:35:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59117 00:09:14.984 06:35:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59117 /var/tmp/spdk2.sock 00:09:14.984 06:35:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59117 ']' 00:09:14.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:14.984 06:35:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:14.984 06:35:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:14.984 06:35:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:14.984 06:35:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:14.984 06:35:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:14.984 06:35:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:14.985 [2024-12-06 06:35:27.671787] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:09:14.985 [2024-12-06 06:35:27.671909] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59117 ] 00:09:15.242 [2024-12-06 06:35:27.844009] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:15.242 [2024-12-06 06:35:27.844076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.499 [2024-12-06 06:35:28.060834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.542 06:35:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:16.542 06:35:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:16.542 06:35:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59107 00:09:16.542 06:35:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59107 00:09:16.542 06:35:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:16.800 06:35:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59107 00:09:16.800 06:35:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59107 ']' 00:09:16.800 06:35:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59107 00:09:16.800 06:35:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:16.800 06:35:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:16.800 06:35:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59107 00:09:16.800 killing process with pid 59107 00:09:16.800 06:35:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:16.800 06:35:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:16.800 06:35:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59107' 00:09:16.800 06:35:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59107 00:09:16.800 06:35:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59107 00:09:21.084 06:35:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59117 00:09:21.084 06:35:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59117 ']' 00:09:21.084 06:35:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59117 00:09:21.084 06:35:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:21.084 06:35:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:21.084 06:35:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59117 00:09:21.084 06:35:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:21.084 killing process with pid 59117 00:09:21.084 06:35:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:21.084 06:35:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59117' 00:09:21.084 06:35:32 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59117 00:09:21.084 06:35:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59117 00:09:22.020 00:09:22.020 real 0m8.075s 00:09:22.020 user 0m8.287s 00:09:22.020 sys 0m0.888s 00:09:22.020 06:35:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:22.020 ************************************ 00:09:22.020 END TEST non_locking_app_on_locked_coremask 00:09:22.020 ************************************ 00:09:22.020 06:35:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:22.279 06:35:34 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:22.279 06:35:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:22.279 06:35:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:22.279 06:35:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:22.279 ************************************ 00:09:22.279 START TEST locking_app_on_unlocked_coremask 00:09:22.279 ************************************ 00:09:22.279 06:35:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:09:22.279 06:35:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59235 00:09:22.279 06:35:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59235 /var/tmp/spdk.sock 00:09:22.279 06:35:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:22.279 06:35:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59235 ']' 00:09:22.279 06:35:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.279 06:35:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:22.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.279 06:35:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.279 06:35:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:22.279 06:35:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:22.279 [2024-12-06 06:35:34.871770] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:09:22.279 [2024-12-06 06:35:34.871905] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59235 ] 00:09:22.540 [2024-12-06 06:35:35.035175] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
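non_locking_app_on_locked_coremask, which just passed, shows the sanctioned way to share a claimed core: the second instance opts out of locking and talks on its own RPC socket. A sketch of the launch sequence with the binary path and flags from the trace (the real test also waits on each socket with waitforlisten before proceeding):

  tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  "$tgt" -m 0x1 &                                       # claims core 0's lock
  pid1=$!
  "$tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  pid2=$!                                               # shares core 0, takes no lock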
00:09:22.540 [2024-12-06 06:35:35.035241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.540 [2024-12-06 06:35:35.150833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.111 06:35:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:23.111 06:35:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:23.111 06:35:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59246 00:09:23.112 06:35:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59246 /var/tmp/spdk2.sock 00:09:23.112 06:35:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59246 ']' 00:09:23.112 06:35:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:23.112 06:35:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:23.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:23.112 06:35:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:23.112 06:35:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:23.112 06:35:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:23.112 06:35:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:23.112 [2024-12-06 06:35:35.846544] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
00:09:23.112 [2024-12-06 06:35:35.846672] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59246 ] 00:09:23.373 [2024-12-06 06:35:36.026400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.634 [2024-12-06 06:35:36.288819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.014 06:35:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:25.014 06:35:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:25.014 06:35:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59246 00:09:25.014 06:35:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59246 00:09:25.014 06:35:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:25.272 06:35:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59235 00:09:25.272 06:35:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59235 ']' 00:09:25.272 06:35:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59235 00:09:25.272 06:35:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:25.272 06:35:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:25.272 06:35:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59235 00:09:25.272 06:35:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:25.272 killing process with pid 59235 00:09:25.272 06:35:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:25.272 06:35:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59235' 00:09:25.272 06:35:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59235 00:09:25.272 06:35:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59235 00:09:29.474 06:35:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59246 00:09:29.474 06:35:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59246 ']' 00:09:29.474 06:35:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59246 00:09:29.474 06:35:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:29.474 06:35:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:29.474 06:35:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59246 00:09:29.474 killing process with pid 59246 00:09:29.474 06:35:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:29.474 06:35:41 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:29.474 06:35:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59246' 00:09:29.474 06:35:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59246 00:09:29.474 06:35:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59246 00:09:30.858 00:09:30.858 real 0m8.418s 00:09:30.858 user 0m8.594s 00:09:30.858 sys 0m0.924s 00:09:30.858 06:35:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.858 06:35:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:30.858 ************************************ 00:09:30.858 END TEST locking_app_on_unlocked_coremask 00:09:30.858 ************************************ 00:09:30.858 06:35:43 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:09:30.858 06:35:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:30.858 06:35:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.858 06:35:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:30.858 ************************************ 00:09:30.858 START TEST locking_app_on_locked_coremask 00:09:30.858 ************************************ 00:09:30.858 06:35:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:09:30.858 06:35:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59365 00:09:30.858 06:35:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59365 /var/tmp/spdk.sock 00:09:30.858 06:35:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59365 ']' 00:09:30.858 06:35:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.858 06:35:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:30.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.858 06:35:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.858 06:35:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:30.858 06:35:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:30.858 06:35:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:30.859 [2024-12-06 06:35:43.376778] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
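locking_app_on_unlocked_coremask, finished above, is the mirror case: when the first instance starts with --disable-cpumask-locks it leaves core 0 unclaimed, so a second, locking instance can take the lock even though the core is already busy; the trace's locks_exist check runs against the second pid. Sketch:

  tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  "$tgt" -m 0x1 --disable-cpumask-locks &   # busy on core 0, takes no lock
  pid1=$!
  "$tgt" -m 0x1 -r /var/tmp/spdk2.sock &    # free to claim the core-0 lock
  pid2=$!
  # lslocks -p "$pid2" | grep spdk_cpu_lock now matches; pid1 shows nothing.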
00:09:30.859 [2024-12-06 06:35:43.376944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59365 ] 00:09:30.859 [2024-12-06 06:35:43.542445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.193 [2024-12-06 06:35:43.681041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.768 06:35:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:31.768 06:35:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:31.768 06:35:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59381 00:09:31.768 06:35:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59381 /var/tmp/spdk2.sock 00:09:31.768 06:35:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:09:31.768 06:35:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59381 /var/tmp/spdk2.sock 00:09:31.768 06:35:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:31.768 06:35:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:31.768 06:35:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:31.768 06:35:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:31.768 06:35:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:31.768 06:35:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59381 /var/tmp/spdk2.sock 00:09:31.768 06:35:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59381 ']' 00:09:31.768 06:35:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:31.768 06:35:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:31.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:31.768 06:35:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:31.768 06:35:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:31.768 06:35:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:32.031 [2024-12-06 06:35:44.516757] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
00:09:32.031 [2024-12-06 06:35:44.516914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59381 ] 00:09:32.031 [2024-12-06 06:35:44.702543] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59365 has claimed it. 00:09:32.031 [2024-12-06 06:35:44.702636] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:32.602 ERROR: process (pid: 59381) is no longer running 00:09:32.602 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59381) - No such process 00:09:32.602 06:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:32.602 06:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:09:32.602 06:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:09:32.602 06:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:32.602 06:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:32.602 06:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:32.602 06:35:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59365 00:09:32.602 06:35:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:32.603 06:35:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59365 00:09:32.864 06:35:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59365 00:09:32.864 06:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59365 ']' 00:09:32.864 06:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59365 00:09:32.864 06:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:32.864 06:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:32.864 06:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59365 00:09:32.864 06:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:32.864 06:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:32.864 killing process with pid 59365 00:09:32.864 06:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59365' 00:09:32.864 06:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59365 00:09:32.864 06:35:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59365 00:09:34.781 00:09:34.781 real 0m3.946s 00:09:34.781 user 0m4.047s 00:09:34.781 sys 0m0.772s 00:09:34.781 06:35:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.781 ************************************ 00:09:34.781 END 
TEST locking_app_on_locked_coremask 00:09:34.781 ************************************ 00:09:34.781 06:35:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:34.781 06:35:47 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:09:34.781 06:35:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:34.781 06:35:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.781 06:35:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:34.781 ************************************ 00:09:34.781 START TEST locking_overlapped_coremask 00:09:34.781 ************************************ 00:09:34.781 06:35:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:09:34.781 06:35:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59445 00:09:34.781 06:35:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59445 /var/tmp/spdk.sock 00:09:34.781 06:35:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59445 ']' 00:09:34.781 06:35:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.781 06:35:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:34.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.781 06:35:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.781 06:35:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:34.781 06:35:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:34.781 06:35:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:09:34.781 [2024-12-06 06:35:47.405715] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
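locking_app_on_locked_coremask, which ended just above, leans on the NOT helper visible throughout these traces: claim_cpu_cores reports 'Cannot create lock on core 0, probably process ... has claimed it', the second app exits nonzero, and NOT converts that expected failure into a test pass. A condensed sketch; the real helper in autotest_common.sh also special-cases exit codes above 128:

  # Sketch: run a command that is expected to fail; succeed only if it does.
  NOT() {
      local es=0
      "$@" || es=$?
      # Exit codes above 128 mean the command died from a signal, which is
      # still a failure of the command, so this sketch treats it the same.
      ((es != 0))
  }

  # Usage, as in the trace: the second waitforlisten must not succeed.
  NOT waitforlisten "$pid2" /var/tmp/spdk2.sock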
00:09:34.781 [2024-12-06 06:35:47.405933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59445 ] 00:09:35.043 [2024-12-06 06:35:47.578857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:35.043 [2024-12-06 06:35:47.725046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:35.043 [2024-12-06 06:35:47.725211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:35.043 [2024-12-06 06:35:47.725520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.001 06:35:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:36.001 06:35:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:36.001 06:35:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59463 00:09:36.001 06:35:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59463 /var/tmp/spdk2.sock 00:09:36.001 06:35:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:09:36.001 06:35:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59463 /var/tmp/spdk2.sock 00:09:36.001 06:35:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:36.001 06:35:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:09:36.001 06:35:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:36.001 06:35:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:36.001 06:35:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:36.001 06:35:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59463 /var/tmp/spdk2.sock 00:09:36.001 06:35:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59463 ']' 00:09:36.001 06:35:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:36.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:36.001 06:35:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:36.001 06:35:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:36.001 06:35:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:36.001 06:35:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:36.001 [2024-12-06 06:35:48.572634] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
00:09:36.001 [2024-12-06 06:35:48.572788] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59463 ] 00:09:36.263 [2024-12-06 06:35:48.756062] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59445 has claimed it. 00:09:36.263 [2024-12-06 06:35:48.756140] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:36.834 ERROR: process (pid: 59463) is no longer running 00:09:36.834 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59463) - No such process 00:09:36.834 06:35:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:36.834 06:35:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:09:36.834 06:35:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:09:36.834 06:35:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:36.834 06:35:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:36.834 06:35:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:36.834 06:35:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:36.834 06:35:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:36.834 06:35:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:36.834 06:35:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:36.834 06:35:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59445 00:09:36.834 06:35:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59445 ']' 00:09:36.834 06:35:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59445 00:09:36.834 06:35:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:09:36.834 06:35:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:36.834 06:35:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59445 00:09:36.834 killing process with pid 59445 00:09:36.834 06:35:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:36.834 06:35:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:36.834 06:35:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59445' 00:09:36.834 06:35:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59445 00:09:36.834 06:35:49 
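After the overlapped-mask claim fails above, check_remaining_locks confirms the surviving -m 0x7 instance still owns exactly cores 0 through 2 by comparing the lock-file glob against a brace expansion; the sketch below mirrors the traced expansion directly:

  # Sketch: the surviving -m 0x7 target must hold exactly locks 000-002.
  check_remaining_locks() {
      local locks=(/var/tmp/spdk_cpu_lock_*)
      local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
      [[ ${locks[*]} == "${locks_expected[*]}" ]]
  }

  check_remaining_locks || echo "unexpected set of core lock files" >&2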
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59445 00:09:38.742 00:09:38.742 real 0m3.757s 00:09:38.742 user 0m10.080s 00:09:38.742 sys 0m0.658s 00:09:38.742 06:35:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.742 06:35:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:38.742 ************************************ 00:09:38.742 END TEST locking_overlapped_coremask 00:09:38.742 ************************************ 00:09:38.742 06:35:51 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:09:38.742 06:35:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:38.742 06:35:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.742 06:35:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:38.742 ************************************ 00:09:38.742 START TEST locking_overlapped_coremask_via_rpc 00:09:38.742 ************************************ 00:09:38.742 06:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:09:38.742 06:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59521 00:09:38.742 06:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59521 /var/tmp/spdk.sock 00:09:38.742 06:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59521 ']' 00:09:38.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.742 06:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:38.742 06:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.742 06:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:38.742 06:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.742 06:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:38.742 06:35:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:38.742 [2024-12-06 06:35:51.234186] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:09:38.742 [2024-12-06 06:35:51.234346] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59521 ] 00:09:38.742 [2024-12-06 06:35:51.400690] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:38.742 [2024-12-06 06:35:51.400760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:39.004 [2024-12-06 06:35:51.552777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:39.004 [2024-12-06 06:35:51.553132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:39.004 [2024-12-06 06:35:51.553329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.947 06:35:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:39.947 06:35:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:39.947 06:35:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59545 00:09:39.947 06:35:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59545 /var/tmp/spdk2.sock 00:09:39.947 06:35:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59545 ']' 00:09:39.947 06:35:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:39.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:39.947 06:35:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:39.947 06:35:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:39.947 06:35:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:39.947 06:35:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:39.947 06:35:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.947 [2024-12-06 06:35:52.532027] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:09:39.947 [2024-12-06 06:35:52.532194] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59545 ] 00:09:40.208 [2024-12-06 06:35:52.721626] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:40.208 [2024-12-06 06:35:52.721716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:40.470 [2024-12-06 06:35:53.032915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:40.470 [2024-12-06 06:35:53.033137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:40.470 [2024-12-06 06:35:53.033186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:43.014 06:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:43.014 06:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:43.014 06:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:43.014 06:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.014 06:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.014 06:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.014 06:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:43.014 06:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:09:43.014 06:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:43.014 06:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:43.014 06:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:43.014 06:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:43.014 06:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:43.014 06:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:43.014 06:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.014 06:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.014 [2024-12-06 06:35:55.165717] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59521 has claimed it. 00:09:43.014 request: 00:09:43.014 { 00:09:43.014 "method": "framework_enable_cpumask_locks", 00:09:43.014 "req_id": 1 00:09:43.014 } 00:09:43.014 Got JSON-RPC error response 00:09:43.014 response: 00:09:43.014 { 00:09:43.014 "code": -32603, 00:09:43.014 "message": "Failed to claim CPU core: 2" 00:09:43.014 } 00:09:43.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
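The failed claim above names core 2 because that is the one bit the two masks share; a quick check:

    # 0x07 = 0b00111 -> cores 0,1,2 (first target)
    # 0x1c = 0b11100 -> cores 2,3,4 (second target)
    printf '0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. only bit 2 overlaps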
00:09:43.014 06:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:43.014 06:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:09:43.014 06:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:43.014 06:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:43.014 06:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:43.014 06:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59521 /var/tmp/spdk.sock 00:09:43.015 06:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59521 ']' 00:09:43.015 06:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.015 06:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:43.015 06:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.015 06:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:43.015 06:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.015 06:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:43.015 06:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:43.015 06:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59545 /var/tmp/spdk2.sock 00:09:43.015 06:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59545 ']' 00:09:43.015 06:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:43.015 06:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:43.015 06:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:43.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
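The check_remaining_locks step that follows verifies the claim's side effects on disk: a target that claims core N creates /var/tmp/spdk_cpu_lock_NNN, so with locks enabled on the 0x7 target exactly three files must exist. The comparison is the glob-versus-brace-expansion pattern already traced in the earlier coremask test, roughly:

    locks=(/var/tmp/spdk_cpu_lock_*)                     # lock files actually present
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0-2 for -m 0x7
    [[ ${locks[*]} == "${locks_expected[*]}" ]]          # succeeds only if the sets match exactly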
00:09:43.015 06:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:43.015 06:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.015 06:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:43.015 06:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:43.015 06:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:43.015 06:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:43.015 06:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:43.015 06:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:43.015 00:09:43.015 real 0m4.518s 00:09:43.015 user 0m1.399s 00:09:43.015 sys 0m0.197s 00:09:43.015 06:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.015 06:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.015 ************************************ 00:09:43.015 END TEST locking_overlapped_coremask_via_rpc 00:09:43.015 ************************************ 00:09:43.015 06:35:55 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:09:43.015 06:35:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59521 ]] 00:09:43.015 06:35:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59521 00:09:43.015 06:35:55 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59521 ']' 00:09:43.015 06:35:55 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59521 00:09:43.015 06:35:55 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:09:43.015 06:35:55 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:43.015 06:35:55 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59521 00:09:43.015 killing process with pid 59521 00:09:43.015 06:35:55 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:43.015 06:35:55 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:43.015 06:35:55 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59521' 00:09:43.015 06:35:55 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59521 00:09:43.015 06:35:55 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59521 00:09:45.011 06:35:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59545 ]] 00:09:45.011 06:35:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59545 00:09:45.011 06:35:57 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59545 ']' 00:09:45.011 06:35:57 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59545 00:09:45.011 06:35:57 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:09:45.011 06:35:57 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:45.011 
06:35:57 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59545 00:09:45.011 killing process with pid 59545 00:09:45.011 06:35:57 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:09:45.011 06:35:57 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:09:45.011 06:35:57 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59545' 00:09:45.011 06:35:57 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59545 00:09:45.011 06:35:57 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59545 00:09:46.925 06:35:59 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:46.925 06:35:59 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:09:46.925 06:35:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59521 ]] 00:09:46.925 06:35:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59521 00:09:46.925 06:35:59 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59521 ']' 00:09:46.925 Process with pid 59521 is not found 00:09:46.925 06:35:59 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59521 00:09:46.925 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59521) - No such process 00:09:46.925 06:35:59 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59521 is not found' 00:09:46.925 06:35:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59545 ]] 00:09:46.925 06:35:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59545 00:09:46.925 06:35:59 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59545 ']' 00:09:46.926 06:35:59 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59545 00:09:46.926 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59545) - No such process 00:09:46.926 Process with pid 59545 is not found 00:09:46.926 06:35:59 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59545 is not found' 00:09:46.926 06:35:59 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:46.926 00:09:46.926 real 0m38.883s 00:09:46.926 user 1m9.699s 00:09:46.926 sys 0m5.758s 00:09:46.926 06:35:59 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:46.926 ************************************ 00:09:46.926 END TEST cpu_locks 00:09:46.926 ************************************ 00:09:46.926 06:35:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:46.926 ************************************ 00:09:46.926 END TEST event 00:09:46.926 ************************************ 00:09:46.926 00:09:46.926 real 1m6.715s 00:09:46.926 user 2m4.353s 00:09:46.926 sys 0m9.229s 00:09:46.926 06:35:59 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:46.926 06:35:59 event -- common/autotest_common.sh@10 -- # set +x 00:09:46.926 06:35:59 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:46.926 06:35:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:46.926 06:35:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:46.926 06:35:59 -- common/autotest_common.sh@10 -- # set +x 00:09:46.926 ************************************ 00:09:46.926 START TEST thread 00:09:46.926 ************************************ 00:09:46.926 06:35:59 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:47.187 * Looking for test storage... 
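In the cpu_locks cleanup just above, the 'kill: (59521) - No such process' and '(59545)' lines are expected rather than failures: both targets were already terminated by the per-test killprocess calls, so the cleanup probe with kill -0 (which sends no signal, it only tests for process existence) reports the pids as gone and the trap falls through to rm -f on the stale lock files:

    kill -0 "$pid" 2>/dev/null || echo "Process with pid $pid is not found"   # existence probe only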
00:09:47.187 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:09:47.187 06:35:59 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:47.187 06:35:59 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:09:47.187 06:35:59 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:47.188 06:35:59 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:47.188 06:35:59 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:47.188 06:35:59 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:47.188 06:35:59 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:47.188 06:35:59 thread -- scripts/common.sh@336 -- # IFS=.-: 00:09:47.188 06:35:59 thread -- scripts/common.sh@336 -- # read -ra ver1 00:09:47.188 06:35:59 thread -- scripts/common.sh@337 -- # IFS=.-: 00:09:47.188 06:35:59 thread -- scripts/common.sh@337 -- # read -ra ver2 00:09:47.188 06:35:59 thread -- scripts/common.sh@338 -- # local 'op=<' 00:09:47.188 06:35:59 thread -- scripts/common.sh@340 -- # ver1_l=2 00:09:47.188 06:35:59 thread -- scripts/common.sh@341 -- # ver2_l=1 00:09:47.188 06:35:59 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:47.188 06:35:59 thread -- scripts/common.sh@344 -- # case "$op" in 00:09:47.188 06:35:59 thread -- scripts/common.sh@345 -- # : 1 00:09:47.188 06:35:59 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:47.188 06:35:59 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:47.188 06:35:59 thread -- scripts/common.sh@365 -- # decimal 1 00:09:47.188 06:35:59 thread -- scripts/common.sh@353 -- # local d=1 00:09:47.188 06:35:59 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:47.188 06:35:59 thread -- scripts/common.sh@355 -- # echo 1 00:09:47.188 06:35:59 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:09:47.188 06:35:59 thread -- scripts/common.sh@366 -- # decimal 2 00:09:47.188 06:35:59 thread -- scripts/common.sh@353 -- # local d=2 00:09:47.188 06:35:59 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:47.188 06:35:59 thread -- scripts/common.sh@355 -- # echo 2 00:09:47.188 06:35:59 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:09:47.188 06:35:59 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:47.188 06:35:59 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:47.188 06:35:59 thread -- scripts/common.sh@368 -- # return 0 00:09:47.188 06:35:59 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:47.188 06:35:59 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:47.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.188 --rc genhtml_branch_coverage=1 00:09:47.188 --rc genhtml_function_coverage=1 00:09:47.188 --rc genhtml_legend=1 00:09:47.188 --rc geninfo_all_blocks=1 00:09:47.188 --rc geninfo_unexecuted_blocks=1 00:09:47.188 00:09:47.188 ' 00:09:47.188 06:35:59 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:47.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.188 --rc genhtml_branch_coverage=1 00:09:47.188 --rc genhtml_function_coverage=1 00:09:47.188 --rc genhtml_legend=1 00:09:47.188 --rc geninfo_all_blocks=1 00:09:47.188 --rc geninfo_unexecuted_blocks=1 00:09:47.188 00:09:47.188 ' 00:09:47.188 06:35:59 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:47.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:09:47.188 --rc genhtml_branch_coverage=1 00:09:47.188 --rc genhtml_function_coverage=1 00:09:47.188 --rc genhtml_legend=1 00:09:47.188 --rc geninfo_all_blocks=1 00:09:47.188 --rc geninfo_unexecuted_blocks=1 00:09:47.188 00:09:47.188 ' 00:09:47.188 06:35:59 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:47.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.188 --rc genhtml_branch_coverage=1 00:09:47.188 --rc genhtml_function_coverage=1 00:09:47.188 --rc genhtml_legend=1 00:09:47.188 --rc geninfo_all_blocks=1 00:09:47.188 --rc geninfo_unexecuted_blocks=1 00:09:47.188 00:09:47.188 ' 00:09:47.188 06:35:59 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:47.188 06:35:59 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:09:47.188 06:35:59 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:47.188 06:35:59 thread -- common/autotest_common.sh@10 -- # set +x 00:09:47.188 ************************************ 00:09:47.188 START TEST thread_poller_perf 00:09:47.188 ************************************ 00:09:47.188 06:35:59 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:47.188 [2024-12-06 06:35:59.857184] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:09:47.188 [2024-12-06 06:35:59.858301] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59729 ] 00:09:47.450 [2024-12-06 06:36:00.040499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.710 [2024-12-06 06:36:00.189043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.710 Running 1000 pollers for 1 seconds with 1 microseconds period. 
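For reading the poller_perf banners: per the invocations in this section, -b 1000 registers one thousand pollers, -l sets the poller period in microseconds (1 in this run, 0 in the next, where pollers effectively run on every pass of the reactor loop), and -t is the measurement window in seconds; the odd plurals in 'for 1 seconds with 1 microseconds period' are the tool's literal output, reproduced verbatim.

    test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1   # timed pollers, 1 us period
    test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1   # zero period, same poller count and window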
00:09:48.655 [2024-12-06T06:36:01.396Z] ====================================== 00:09:48.655 [2024-12-06T06:36:01.396Z] busy:2614692088 (cyc) 00:09:48.655 [2024-12-06T06:36:01.396Z] total_run_count: 303000 00:09:48.655 [2024-12-06T06:36:01.396Z] tsc_hz: 2600000000 (cyc) 00:09:48.655 [2024-12-06T06:36:01.397Z] ====================================== 00:09:48.656 [2024-12-06T06:36:01.397Z] poller_cost: 8629 (cyc), 3318 (nsec) 00:09:48.656 ************************************ 00:09:48.656 END TEST thread_poller_perf 00:09:48.656 ************************************ 00:09:48.656 00:09:48.656 real 0m1.551s 00:09:48.656 user 0m1.340s 00:09:48.656 sys 0m0.099s 00:09:48.656 06:36:01 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:48.656 06:36:01 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:48.916 06:36:01 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:48.916 06:36:01 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:09:48.916 06:36:01 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:48.916 06:36:01 thread -- common/autotest_common.sh@10 -- # set +x 00:09:48.916 ************************************ 00:09:48.916 START TEST thread_poller_perf 00:09:48.916 ************************************ 00:09:48.916 06:36:01 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:48.916 [2024-12-06 06:36:01.467274] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:09:48.916 [2024-12-06 06:36:01.468069] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59771 ] 00:09:48.916 [2024-12-06 06:36:01.634062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.178 Running 1000 pollers for 1 seconds with 0 microseconds period. 
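The poller_cost row above is derived from the other three figures: busy cycles divided by total_run_count gives cycles per poller invocation, converted to nanoseconds via tsc_hz. Reproducing the first run's numbers:

    echo $(( 2614692088 / 303000 ))              # 8629 cycles per invocation
    echo $(( 8629 * 1000000000 / 2600000000 ))   # 3318 ns at a 2.6 GHz TSC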
00:09:49.178 [2024-12-06 06:36:01.753952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.563 [2024-12-06T06:36:03.304Z] ====================================== 00:09:50.563 [2024-12-06T06:36:03.304Z] busy:2603444548 (cyc) 00:09:50.563 [2024-12-06T06:36:03.304Z] total_run_count: 3626000 00:09:50.563 [2024-12-06T06:36:03.304Z] tsc_hz: 2600000000 (cyc) 00:09:50.563 [2024-12-06T06:36:03.304Z] ====================================== 00:09:50.563 [2024-12-06T06:36:03.304Z] poller_cost: 717 (cyc), 275 (nsec) 00:09:50.563 ************************************ 00:09:50.563 END TEST thread_poller_perf 00:09:50.563 ************************************ 00:09:50.563 00:09:50.563 real 0m1.500s 00:09:50.563 user 0m1.301s 00:09:50.563 sys 0m0.088s 00:09:50.563 06:36:02 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:50.563 06:36:02 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:50.563 06:36:02 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:09:50.563 00:09:50.563 real 0m3.385s 00:09:50.563 user 0m2.786s 00:09:50.563 sys 0m0.330s 00:09:50.563 ************************************ 00:09:50.563 END TEST thread 00:09:50.563 ************************************ 00:09:50.563 06:36:02 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:50.563 06:36:02 thread -- common/autotest_common.sh@10 -- # set +x 00:09:50.563 06:36:03 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:09:50.563 06:36:03 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:50.563 06:36:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:50.563 06:36:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:50.563 06:36:03 -- common/autotest_common.sh@10 -- # set +x 00:09:50.563 ************************************ 00:09:50.563 START TEST app_cmdline 00:09:50.563 ************************************ 00:09:50.563 06:36:03 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:50.563 * Looking for test storage... 
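Comparing the two poller runs above: with a 1 us period each of the 1000 pollers fired about 303 times (total_run_count 303000) at 8629 cycles apiece, while the zero-period run completed 3626 invocations per poller at only 717 cycles (275 ns) each, the timed-poller path presumably carrying the extra per-invocation bookkeeping:

    echo $(( 2603444548 / 3626000 ))   # 717 cycles per invocation in the zero-period run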
00:09:50.563 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:50.563 06:36:03 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:50.563 06:36:03 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:09:50.563 06:36:03 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:50.563 06:36:03 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:50.563 06:36:03 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:50.563 06:36:03 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:50.563 06:36:03 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:50.563 06:36:03 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:09:50.563 06:36:03 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:09:50.563 06:36:03 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:09:50.563 06:36:03 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:09:50.563 06:36:03 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:09:50.563 06:36:03 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:09:50.563 06:36:03 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:09:50.563 06:36:03 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:50.563 06:36:03 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:09:50.563 06:36:03 app_cmdline -- scripts/common.sh@345 -- # : 1 00:09:50.563 06:36:03 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:50.563 06:36:03 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:50.563 06:36:03 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:09:50.563 06:36:03 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:09:50.563 06:36:03 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:50.563 06:36:03 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:09:50.563 06:36:03 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:09:50.563 06:36:03 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:09:50.563 06:36:03 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:09:50.563 06:36:03 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:50.563 06:36:03 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:09:50.563 06:36:03 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:09:50.563 06:36:03 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:50.563 06:36:03 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:50.563 06:36:03 app_cmdline -- scripts/common.sh@368 -- # return 0 00:09:50.563 06:36:03 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:50.563 06:36:03 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:50.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.563 --rc genhtml_branch_coverage=1 00:09:50.563 --rc genhtml_function_coverage=1 00:09:50.563 --rc genhtml_legend=1 00:09:50.563 --rc geninfo_all_blocks=1 00:09:50.563 --rc geninfo_unexecuted_blocks=1 00:09:50.563 00:09:50.563 ' 00:09:50.563 06:36:03 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:50.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.563 --rc genhtml_branch_coverage=1 00:09:50.563 --rc genhtml_function_coverage=1 00:09:50.563 --rc genhtml_legend=1 00:09:50.563 --rc geninfo_all_blocks=1 00:09:50.563 --rc geninfo_unexecuted_blocks=1 00:09:50.563 
00:09:50.563 ' 00:09:50.563 06:36:03 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:50.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.563 --rc genhtml_branch_coverage=1 00:09:50.563 --rc genhtml_function_coverage=1 00:09:50.563 --rc genhtml_legend=1 00:09:50.563 --rc geninfo_all_blocks=1 00:09:50.563 --rc geninfo_unexecuted_blocks=1 00:09:50.563 00:09:50.563 ' 00:09:50.563 06:36:03 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:50.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.563 --rc genhtml_branch_coverage=1 00:09:50.563 --rc genhtml_function_coverage=1 00:09:50.563 --rc genhtml_legend=1 00:09:50.563 --rc geninfo_all_blocks=1 00:09:50.563 --rc geninfo_unexecuted_blocks=1 00:09:50.563 00:09:50.563 ' 00:09:50.563 06:36:03 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:50.563 06:36:03 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59849 00:09:50.563 06:36:03 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:50.563 06:36:03 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59849 00:09:50.563 06:36:03 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59849 ']' 00:09:50.564 06:36:03 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.564 06:36:03 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:50.564 06:36:03 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.564 06:36:03 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:50.564 06:36:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:50.885 [2024-12-06 06:36:03.339161] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
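cmdline.sh starts this target with --rpcs-allowed spdk_get_version,rpc_get_methods, an allow-list: only those two methods are callable on /var/tmp/spdk.sock, and the test exercises both sides of it below. In terms of the rpc.py calls traced in this section:

    scripts/rpc.py spdk_get_version         # allowed: returns the version object shown below
    scripts/rpc.py rpc_get_methods          # allowed: lists exactly the two permitted methods
    scripts/rpc.py env_dpdk_get_mem_stats   # not on the list: JSON-RPC error -32601, Method not found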
00:09:50.885 [2024-12-06 06:36:03.339716] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59849 ] 00:09:50.885 [2024-12-06 06:36:03.522552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.146 [2024-12-06 06:36:03.666837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.719 06:36:04 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:51.719 06:36:04 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:09:51.719 06:36:04 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:09:51.985 { 00:09:51.985 "version": "SPDK v25.01-pre git sha1 0b1b15acc", 00:09:51.985 "fields": { 00:09:51.985 "major": 25, 00:09:51.985 "minor": 1, 00:09:51.985 "patch": 0, 00:09:51.985 "suffix": "-pre", 00:09:51.985 "commit": "0b1b15acc" 00:09:51.985 } 00:09:51.985 } 00:09:51.985 06:36:04 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:09:51.985 06:36:04 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:51.985 06:36:04 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:51.985 06:36:04 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:51.985 06:36:04 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:51.985 06:36:04 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:51.985 06:36:04 app_cmdline -- app/cmdline.sh@26 -- # sort 00:09:51.985 06:36:04 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.985 06:36:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:51.985 06:36:04 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.985 06:36:04 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:51.985 06:36:04 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:51.985 06:36:04 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:51.985 06:36:04 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:09:51.985 06:36:04 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:51.985 06:36:04 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:51.985 06:36:04 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:51.985 06:36:04 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:51.985 06:36:04 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:51.985 06:36:04 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:51.985 06:36:04 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:51.985 06:36:04 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:51.985 06:36:04 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:51.985 06:36:04 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:52.558 request: 00:09:52.558 { 00:09:52.558 "method": "env_dpdk_get_mem_stats", 00:09:52.558 "req_id": 1 00:09:52.558 } 00:09:52.558 Got JSON-RPC error response 00:09:52.558 response: 00:09:52.558 { 00:09:52.558 "code": -32601, 00:09:52.558 "message": "Method not found" 00:09:52.558 } 00:09:52.558 06:36:05 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:09:52.558 06:36:05 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:52.558 06:36:05 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:52.558 06:36:05 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:52.558 06:36:05 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59849 00:09:52.558 06:36:05 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59849 ']' 00:09:52.558 06:36:05 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59849 00:09:52.558 06:36:05 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:09:52.558 06:36:05 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:52.558 06:36:05 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59849 00:09:52.558 killing process with pid 59849 00:09:52.558 06:36:05 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:52.558 06:36:05 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:52.558 06:36:05 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59849' 00:09:52.558 06:36:05 app_cmdline -- common/autotest_common.sh@973 -- # kill 59849 00:09:52.558 06:36:05 app_cmdline -- common/autotest_common.sh@978 -- # wait 59849 00:09:54.511 00:09:54.511 real 0m3.817s 00:09:54.511 user 0m4.087s 00:09:54.511 sys 0m0.642s 00:09:54.511 06:36:06 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:54.511 06:36:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:54.511 ************************************ 00:09:54.511 END TEST app_cmdline 00:09:54.511 ************************************ 00:09:54.511 06:36:06 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:54.511 06:36:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:54.511 06:36:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:54.511 06:36:06 -- common/autotest_common.sh@10 -- # set +x 00:09:54.511 ************************************ 00:09:54.511 START TEST version 00:09:54.511 ************************************ 00:09:54.511 06:36:06 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:54.511 * Looking for test storage... 
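version.sh, whose trace follows, derives the version string two independent ways and compares them: get_header_version pulls each field out of include/spdk/version.h with a grep/cut/tr pipeline, the shell assembles 25.1 (the patch level, 0 here, is only appended when non-zero) and folds the -pre suffix into an rc0 tag, and the result is checked against what the Python bindings report. Condensed from the traces below (paths shortened to the repo root):

    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"')   # 25
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"')   # 1
    version=$major.$minor            # 25.1
    version=${version}rc0            # the -pre suffix becomes an rc0 tag, as traced below
    py_version=$(python3 -c 'import spdk; print(spdk.__version__)')
    [[ $py_version == "$version" ]]  # 25.1rc0 == 25.1rc0 in this run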
00:09:54.511 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:54.511 06:36:07 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:54.511 06:36:07 version -- common/autotest_common.sh@1711 -- # lcov --version 00:09:54.511 06:36:07 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:54.511 06:36:07 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:54.511 06:36:07 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:54.511 06:36:07 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:54.511 06:36:07 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:54.511 06:36:07 version -- scripts/common.sh@336 -- # IFS=.-: 00:09:54.511 06:36:07 version -- scripts/common.sh@336 -- # read -ra ver1 00:09:54.511 06:36:07 version -- scripts/common.sh@337 -- # IFS=.-: 00:09:54.511 06:36:07 version -- scripts/common.sh@337 -- # read -ra ver2 00:09:54.511 06:36:07 version -- scripts/common.sh@338 -- # local 'op=<' 00:09:54.511 06:36:07 version -- scripts/common.sh@340 -- # ver1_l=2 00:09:54.511 06:36:07 version -- scripts/common.sh@341 -- # ver2_l=1 00:09:54.511 06:36:07 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:54.511 06:36:07 version -- scripts/common.sh@344 -- # case "$op" in 00:09:54.511 06:36:07 version -- scripts/common.sh@345 -- # : 1 00:09:54.511 06:36:07 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:54.511 06:36:07 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:54.511 06:36:07 version -- scripts/common.sh@365 -- # decimal 1 00:09:54.511 06:36:07 version -- scripts/common.sh@353 -- # local d=1 00:09:54.511 06:36:07 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:54.511 06:36:07 version -- scripts/common.sh@355 -- # echo 1 00:09:54.511 06:36:07 version -- scripts/common.sh@365 -- # ver1[v]=1 00:09:54.511 06:36:07 version -- scripts/common.sh@366 -- # decimal 2 00:09:54.511 06:36:07 version -- scripts/common.sh@353 -- # local d=2 00:09:54.511 06:36:07 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:54.511 06:36:07 version -- scripts/common.sh@355 -- # echo 2 00:09:54.511 06:36:07 version -- scripts/common.sh@366 -- # ver2[v]=2 00:09:54.511 06:36:07 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:54.511 06:36:07 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:54.511 06:36:07 version -- scripts/common.sh@368 -- # return 0 00:09:54.511 06:36:07 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:54.511 06:36:07 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:54.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.511 --rc genhtml_branch_coverage=1 00:09:54.511 --rc genhtml_function_coverage=1 00:09:54.511 --rc genhtml_legend=1 00:09:54.511 --rc geninfo_all_blocks=1 00:09:54.511 --rc geninfo_unexecuted_blocks=1 00:09:54.511 00:09:54.511 ' 00:09:54.511 06:36:07 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:54.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.511 --rc genhtml_branch_coverage=1 00:09:54.511 --rc genhtml_function_coverage=1 00:09:54.511 --rc genhtml_legend=1 00:09:54.511 --rc geninfo_all_blocks=1 00:09:54.511 --rc geninfo_unexecuted_blocks=1 00:09:54.511 00:09:54.511 ' 00:09:54.511 06:36:07 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:54.511 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:54.511 --rc genhtml_branch_coverage=1 00:09:54.511 --rc genhtml_function_coverage=1 00:09:54.511 --rc genhtml_legend=1 00:09:54.511 --rc geninfo_all_blocks=1 00:09:54.511 --rc geninfo_unexecuted_blocks=1 00:09:54.511 00:09:54.511 ' 00:09:54.511 06:36:07 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:54.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.511 --rc genhtml_branch_coverage=1 00:09:54.511 --rc genhtml_function_coverage=1 00:09:54.511 --rc genhtml_legend=1 00:09:54.511 --rc geninfo_all_blocks=1 00:09:54.511 --rc geninfo_unexecuted_blocks=1 00:09:54.511 00:09:54.511 ' 00:09:54.511 06:36:07 version -- app/version.sh@17 -- # get_header_version major 00:09:54.511 06:36:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:54.511 06:36:07 version -- app/version.sh@14 -- # cut -f2 00:09:54.511 06:36:07 version -- app/version.sh@14 -- # tr -d '"' 00:09:54.511 06:36:07 version -- app/version.sh@17 -- # major=25 00:09:54.511 06:36:07 version -- app/version.sh@18 -- # get_header_version minor 00:09:54.511 06:36:07 version -- app/version.sh@14 -- # tr -d '"' 00:09:54.511 06:36:07 version -- app/version.sh@14 -- # cut -f2 00:09:54.511 06:36:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:54.511 06:36:07 version -- app/version.sh@18 -- # minor=1 00:09:54.511 06:36:07 version -- app/version.sh@19 -- # get_header_version patch 00:09:54.511 06:36:07 version -- app/version.sh@14 -- # cut -f2 00:09:54.512 06:36:07 version -- app/version.sh@14 -- # tr -d '"' 00:09:54.512 06:36:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:54.512 06:36:07 version -- app/version.sh@19 -- # patch=0 00:09:54.512 06:36:07 version -- app/version.sh@20 -- # get_header_version suffix 00:09:54.512 06:36:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:54.512 06:36:07 version -- app/version.sh@14 -- # tr -d '"' 00:09:54.512 06:36:07 version -- app/version.sh@14 -- # cut -f2 00:09:54.512 06:36:07 version -- app/version.sh@20 -- # suffix=-pre 00:09:54.512 06:36:07 version -- app/version.sh@22 -- # version=25.1 00:09:54.512 06:36:07 version -- app/version.sh@25 -- # (( patch != 0 )) 00:09:54.512 06:36:07 version -- app/version.sh@28 -- # version=25.1rc0 00:09:54.512 06:36:07 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:54.512 06:36:07 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:54.512 06:36:07 version -- app/version.sh@30 -- # py_version=25.1rc0 00:09:54.512 06:36:07 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:09:54.512 00:09:54.512 real 0m0.237s 00:09:54.512 user 0m0.148s 00:09:54.512 sys 0m0.115s 00:09:54.512 ************************************ 00:09:54.512 END TEST version 00:09:54.512 ************************************ 00:09:54.512 06:36:07 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:54.512 06:36:07 version -- common/autotest_common.sh@10 -- # set +x 00:09:54.512 06:36:07 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:09:54.512 06:36:07 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:09:54.512 06:36:07 -- spdk/autotest.sh@194 -- # uname -s 00:09:54.512 06:36:07 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:09:54.512 06:36:07 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:54.512 06:36:07 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:54.512 06:36:07 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:09:54.512 06:36:07 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:09:54.512 06:36:07 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:54.512 06:36:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:54.512 06:36:07 -- common/autotest_common.sh@10 -- # set +x 00:09:54.512 ************************************ 00:09:54.512 START TEST blockdev_nvme 00:09:54.512 ************************************ 00:09:54.512 06:36:07 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:09:54.773 * Looking for test storage... 00:09:54.773 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:09:54.773 06:36:07 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:54.773 06:36:07 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:09:54.773 06:36:07 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:54.773 06:36:07 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:54.773 06:36:07 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:54.773 06:36:07 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:54.773 06:36:07 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:54.773 06:36:07 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:09:54.773 06:36:07 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:09:54.773 06:36:07 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:09:54.773 06:36:07 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:09:54.773 06:36:07 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:09:54.773 06:36:07 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:09:54.773 06:36:07 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:09:54.773 06:36:07 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:54.773 06:36:07 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:09:54.773 06:36:07 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:09:54.773 06:36:07 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:54.773 06:36:07 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:54.773 06:36:07 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:09:54.773 06:36:07 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:09:54.773 06:36:07 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:54.773 06:36:07 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:09:54.773 06:36:07 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:09:54.773 06:36:07 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:09:54.773 06:36:07 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:09:54.773 06:36:07 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:54.773 06:36:07 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:09:54.773 06:36:07 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:09:54.773 06:36:07 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:54.773 06:36:07 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:54.773 06:36:07 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:09:54.773 06:36:07 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:54.773 06:36:07 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:54.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.773 --rc genhtml_branch_coverage=1 00:09:54.773 --rc genhtml_function_coverage=1 00:09:54.773 --rc genhtml_legend=1 00:09:54.773 --rc geninfo_all_blocks=1 00:09:54.773 --rc geninfo_unexecuted_blocks=1 00:09:54.773 00:09:54.773 ' 00:09:54.773 06:36:07 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:54.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.773 --rc genhtml_branch_coverage=1 00:09:54.773 --rc genhtml_function_coverage=1 00:09:54.773 --rc genhtml_legend=1 00:09:54.773 --rc geninfo_all_blocks=1 00:09:54.773 --rc geninfo_unexecuted_blocks=1 00:09:54.773 00:09:54.773 ' 00:09:54.773 06:36:07 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:54.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.773 --rc genhtml_branch_coverage=1 00:09:54.773 --rc genhtml_function_coverage=1 00:09:54.773 --rc genhtml_legend=1 00:09:54.773 --rc geninfo_all_blocks=1 00:09:54.773 --rc geninfo_unexecuted_blocks=1 00:09:54.773 00:09:54.773 ' 00:09:54.773 06:36:07 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:54.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.773 --rc genhtml_branch_coverage=1 00:09:54.773 --rc genhtml_function_coverage=1 00:09:54.773 --rc genhtml_legend=1 00:09:54.773 --rc geninfo_all_blocks=1 00:09:54.773 --rc geninfo_unexecuted_blocks=1 00:09:54.773 00:09:54.773 ' 00:09:54.773 06:36:07 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:54.773 06:36:07 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:09:54.773 06:36:07 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:09:54.773 06:36:07 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:54.773 06:36:07 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:09:54.773 06:36:07 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:09:54.773 06:36:07 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:09:54.773 06:36:07 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:09:54.773 06:36:07 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:09:54.773 06:36:07 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:09:54.773 06:36:07 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:09:54.773 06:36:07 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:09:54.773 06:36:07 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:09:54.773 06:36:07 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:09:54.773 06:36:07 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:09:54.773 06:36:07 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:09:54.773 06:36:07 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:09:54.773 06:36:07 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:09:54.773 06:36:07 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:09:54.773 06:36:07 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:09:54.773 06:36:07 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:09:54.773 06:36:07 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:09:54.773 06:36:07 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:09:54.773 06:36:07 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:09:54.773 06:36:07 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60032 00:09:54.773 06:36:07 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:54.773 06:36:07 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 60032 00:09:54.773 06:36:07 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 60032 ']' 00:09:54.773 06:36:07 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.773 06:36:07 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:54.773 06:36:07 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:54.773 06:36:07 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.773 06:36:07 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:54.773 06:36:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:54.773 [2024-12-06 06:36:07.505369] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
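The blockdev_nvme prologue that follows: bring up spdk_tgt, load a configuration produced by scripts/gen_nvme.sh (one bdev_nvme_attach_controller entry per QEMU NVMe controller, Nvme0 through Nvme3 at PCI addresses 0000:00:10.0 through 0000:00:13.0), then enumerate the resulting block devices. The bdev dump is post-processed with jq as the later traces show, roughly:

    rpc_cmd bdev_get_bdevs | jq -r '.[] | select(.claimed == false)'   # keep only unclaimed bdevs
    # ... | jq -r .name                                                # yields Nvme0n1, Nvme1n1, Nvme2n1, ...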
00:09:54.773 [2024-12-06 06:36:07.505769] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60032 ] 00:09:55.062 [2024-12-06 06:36:07.672783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.323 [2024-12-06 06:36:07.817553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.895 06:36:08 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:55.895 06:36:08 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:09:55.895 06:36:08 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:09:55.895 06:36:08 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:09:55.895 06:36:08 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:09:55.895 06:36:08 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:09:55.895 06:36:08 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:55.896 06:36:08 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:09:55.896 06:36:08 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.896 06:36:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:56.469 06:36:08 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.469 06:36:08 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:09:56.469 06:36:08 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.469 06:36:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:56.469 06:36:08 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.469 06:36:08 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:09:56.469 06:36:08 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:09:56.469 06:36:08 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.469 06:36:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:56.469 06:36:08 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.469 06:36:08 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:09:56.469 06:36:08 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.469 06:36:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:56.469 06:36:08 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.469 06:36:08 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:09:56.469 06:36:08 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.469 06:36:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:56.469 06:36:08 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.469 06:36:08 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:09:56.469 06:36:08 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:09:56.469 06:36:08 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.469 06:36:08 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:09:56.469 06:36:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:56.469 06:36:09 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.469 06:36:09 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:09:56.469 06:36:09 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:09:56.470 06:36:09 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "271d5ee1-099d-4574-98c6-d70bffaab820"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "271d5ee1-099d-4574-98c6-d70bffaab820",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "05de06e2-0d91-41f7-be22-91d6dec3f486"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "05de06e2-0d91-41f7-be22-91d6dec3f486",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "d77b5814-6ffb-4878-8395-a928aba6ae57"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d77b5814-6ffb-4878-8395-a928aba6ae57",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "ce2406bc-3722-4ecb-bc35-6c9159ccef0a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ce2406bc-3722-4ecb-bc35-6c9159ccef0a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "56be2e06-6fb1-48dc-b75f-0f541ee8b460"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "56be2e06-6fb1-48dc-b75f-0f541ee8b460",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "976445d4-faaf-4c2a-a205-4212c28fbc91"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "976445d4-faaf-4c2a-a205-4212c28fbc91",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:09:56.470 06:36:09 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:09:56.470 06:36:09 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:09:56.470 06:36:09 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:09:56.470 06:36:09 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 60032 00:09:56.470 06:36:09 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 60032 ']' 00:09:56.470 06:36:09 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 60032 00:09:56.470 06:36:09 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:09:56.470 06:36:09 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:56.470 06:36:09 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60032 00:09:56.470 killing process with pid 60032 00:09:56.470 06:36:09 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:56.470 06:36:09 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:56.470 06:36:09 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60032' 00:09:56.470 06:36:09 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 60032 00:09:56.470 06:36:09 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 60032 00:09:58.385 06:36:10 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:58.385 06:36:10 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:58.385 06:36:10 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:09:58.385 06:36:10 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.385 06:36:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:58.385 ************************************ 00:09:58.385 START TEST bdev_hello_world 00:09:58.385 ************************************ 00:09:58.385 06:36:10 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:58.385 [2024-12-06 06:36:10.978719] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:09:58.385 [2024-12-06 06:36:10.978892] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60127 ] 00:09:58.645 [2024-12-06 06:36:11.146048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.645 [2024-12-06 06:36:11.289087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.218 [2024-12-06 06:36:11.899788] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:59.218 [2024-12-06 06:36:11.899875] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:09:59.218 [2024-12-06 06:36:11.899906] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:59.218 [2024-12-06 06:36:11.902798] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:59.480 [2024-12-06 06:36:12.034097] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:59.480 [2024-12-06 06:36:12.034200] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:59.480 [2024-12-06 06:36:12.034863] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
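What just ran is the stock hello_bdev example pointed at the generated bdev config: it opens Nvme0n1, writes a buffer through an I/O channel, and reads back "Hello World!". Condensed, the bdev selection (blockdev.sh@785-789 above) and the run amount to the sketch below; paths are the ones from this run, and taking the first unclaimed bdev with head -n 1 is an assumption about how Nvme0n1 ends up as hello_world_bdev:

  # While spdk_tgt is still up: list unclaimed bdevs and take the first name.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
      | jq -r '.[] | select(.claimed == false) | .name' | head -n 1
  # Then run the example standalone against the same JSON config:
  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1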
00:09:59.480 00:09:59.480 [2024-12-06 06:36:12.034895] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:10:00.423 00:10:00.423 real 0m1.992s 00:10:00.423 user 0m1.625s 00:10:00.423 sys 0m0.252s 00:10:00.423 06:36:12 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:00.423 ************************************ 00:10:00.423 END TEST bdev_hello_world 00:10:00.423 ************************************ 00:10:00.423 06:36:12 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:10:00.423 06:36:12 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:10:00.423 06:36:12 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:00.423 06:36:12 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:00.423 06:36:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:00.423 ************************************ 00:10:00.423 START TEST bdev_bounds 00:10:00.423 ************************************ 00:10:00.423 06:36:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:10:00.423 Process bdevio pid: 60169 00:10:00.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.423 06:36:12 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=60169 00:10:00.423 06:36:12 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:00.423 06:36:12 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:10:00.423 06:36:12 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 60169' 00:10:00.423 06:36:12 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 60169 00:10:00.423 06:36:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 60169 ']' 00:10:00.424 06:36:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.424 06:36:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:00.424 06:36:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.424 06:36:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:00.424 06:36:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:00.424 [2024-12-06 06:36:13.080104] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
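Unlike hello_bdev, the bdevio binary that bdev_bounds launches here is started with -w, so it comes up idle and waits for RPC; the CUnit suites that follow are kicked off through its tests.py helper. A hand-run sketch of the same flow (binary, flags, and config path as in this run; the explicit kill is an assumption standing in for the harness cleanup trap):

  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  bdevio_pid=$!
  # Once its RPC socket is listening, run every registered suite:
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
  kill "$bdevio_pid"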
00:10:00.424 [2024-12-06 06:36:13.080379] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60169 ] 00:10:00.685 [2024-12-06 06:36:13.263957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:00.685 [2024-12-06 06:36:13.404309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.685 [2024-12-06 06:36:13.404977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:00.685 [2024-12-06 06:36:13.404986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.629 06:36:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:01.629 06:36:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:10:01.629 06:36:14 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:10:01.629 I/O targets: 00:10:01.629 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:10:01.629 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:10:01.629 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:01.629 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:01.629 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:01.629 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:10:01.629 00:10:01.629 00:10:01.629 CUnit - A unit testing framework for C - Version 2.1-3 00:10:01.629 http://cunit.sourceforge.net/ 00:10:01.629 00:10:01.629 00:10:01.629 Suite: bdevio tests on: Nvme3n1 00:10:01.629 Test: blockdev write read block ...passed 00:10:01.629 Test: blockdev write zeroes read block ...passed 00:10:01.629 Test: blockdev write zeroes read no split ...passed 00:10:01.629 Test: blockdev write zeroes read split ...passed 00:10:01.629 Test: blockdev write zeroes read split partial ...passed 00:10:01.629 Test: blockdev reset ...[2024-12-06 06:36:14.251457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:10:01.629 passed 00:10:01.629 Test: blockdev write read 8 blocks ...[2024-12-06 06:36:14.257167] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:10:01.629 passed 00:10:01.629 Test: blockdev write read size > 128k ...passed 00:10:01.629 Test: blockdev write read invalid size ...passed 00:10:01.629 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:01.629 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:01.629 Test: blockdev write read max offset ...passed 00:10:01.629 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:01.629 Test: blockdev writev readv 8 blocks ...passed 00:10:01.629 Test: blockdev writev readv 30 x 1block ...passed 00:10:01.629 Test: blockdev writev readv block ...passed 00:10:01.629 Test: blockdev writev readv size > 128k ...passed 00:10:01.629 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:01.629 Test: blockdev comparev and writev ...[2024-12-06 06:36:14.280176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bbc0a000 len:0x1000 00:10:01.629 passed 00:10:01.629 Test: blockdev nvme passthru rw ...[2024-12-06 06:36:14.280387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:01.630 passed 00:10:01.630 Test: blockdev nvme passthru vendor specific ...passed 00:10:01.630 Test: blockdev nvme admin passthru ...[2024-12-06 06:36:14.283006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:01.630 [2024-12-06 06:36:14.283060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:01.630 passed 00:10:01.630 Test: blockdev copy ...passed 00:10:01.630 Suite: bdevio tests on: Nvme2n3 00:10:01.630 Test: blockdev write read block ...passed 00:10:01.630 Test: blockdev write zeroes read block ...passed 00:10:01.630 Test: blockdev write zeroes read no split ...passed 00:10:01.630 Test: blockdev write zeroes read split ...passed 00:10:01.630 Test: blockdev write zeroes read split partial ...passed 00:10:01.630 Test: blockdev reset ...[2024-12-06 06:36:14.357893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:01.630 [2024-12-06 06:36:14.365632] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:10:01.630 passed 00:10:01.892 Test: blockdev write read 8 blocks ...passed 00:10:01.892 Test: blockdev write read size > 128k ...passed 00:10:01.892 Test: blockdev write read invalid size ...passed 00:10:01.892 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:01.892 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:01.892 Test: blockdev write read max offset ...passed 00:10:01.892 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:01.892 Test: blockdev writev readv 8 blocks ...passed 00:10:01.892 Test: blockdev writev readv 30 x 1block ...passed 00:10:01.892 Test: blockdev writev readv block ...passed 00:10:01.892 Test: blockdev writev readv size > 128k ...passed 00:10:01.892 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:01.892 Test: blockdev comparev and writev ...[2024-12-06 06:36:14.390502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c0006000 len:0x1000 00:10:01.892 [2024-12-06 06:36:14.390731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:01.892 passed 00:10:01.892 Test: blockdev nvme passthru rw ...passed 00:10:01.892 Test: blockdev nvme passthru vendor specific ...passed 00:10:01.892 Test: blockdev nvme admin passthru ...[2024-12-06 06:36:14.393274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:01.892 [2024-12-06 06:36:14.393331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:01.892 passed 00:10:01.892 Test: blockdev copy ...passed 00:10:01.892 Suite: bdevio tests on: Nvme2n2 00:10:01.892 Test: blockdev write read block ...passed 00:10:01.893 Test: blockdev write zeroes read block ...passed 00:10:01.893 Test: blockdev write zeroes read no split ...passed 00:10:01.893 Test: blockdev write zeroes read split ...passed 00:10:01.893 Test: blockdev write zeroes read split partial ...passed 00:10:01.893 Test: blockdev reset ...[2024-12-06 06:36:14.463196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:01.893 [2024-12-06 06:36:14.470576] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:10:01.893 passed 00:10:01.893 Test: blockdev write read 8 blocks ...passed 00:10:01.893 Test: blockdev write read size > 128k ...passed 00:10:01.893 Test: blockdev write read invalid size ...passed 00:10:01.893 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:01.893 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:01.893 Test: blockdev write read max offset ...passed 00:10:01.893 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:01.893 Test: blockdev writev readv 8 blocks ...passed 00:10:01.893 Test: blockdev writev readv 30 x 1block ...passed 00:10:01.893 Test: blockdev writev readv block ...passed 00:10:01.893 Test: blockdev writev readv size > 128k ...passed 00:10:01.893 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:01.893 Test: blockdev comparev and writev ...[2024-12-06 06:36:14.494423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2df43c000 len:0x1000 00:10:01.893 [2024-12-06 06:36:14.494636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:01.893 passed 00:10:01.893 Test: blockdev nvme passthru rw ...passed 00:10:01.893 Test: blockdev nvme passthru vendor specific ...[2024-12-06 06:36:14.496804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:01.893 [2024-12-06 06:36:14.496852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:01.893 passed 00:10:01.893 Test: blockdev nvme admin passthru ...passed 00:10:01.893 Test: blockdev copy ...passed 00:10:01.893 Suite: bdevio tests on: Nvme2n1 00:10:01.893 Test: blockdev write read block ...passed 00:10:01.893 Test: blockdev write zeroes read block ...passed 00:10:01.893 Test: blockdev write zeroes read no split ...passed 00:10:01.893 Test: blockdev write zeroes read split ...passed 00:10:01.893 Test: blockdev write zeroes read split partial ...passed 00:10:01.893 Test: blockdev reset ...[2024-12-06 06:36:14.568397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:01.893 passed 00:10:01.893 Test: blockdev write read 8 blocks ...[2024-12-06 06:36:14.574985] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:10:01.893 passed 00:10:01.893 Test: blockdev write read size > 128k ...passed 00:10:01.893 Test: blockdev write read invalid size ...passed 00:10:01.893 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:01.893 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:01.893 Test: blockdev write read max offset ...passed 00:10:01.893 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:01.893 Test: blockdev writev readv 8 blocks ...passed 00:10:01.893 Test: blockdev writev readv 30 x 1block ...passed 00:10:01.893 Test: blockdev writev readv block ...passed 00:10:01.893 Test: blockdev writev readv size > 128k ...passed 00:10:01.893 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:01.893 Test: blockdev comparev and writev ...[2024-12-06 06:36:14.596107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2df438000 len:0x1000 00:10:01.893 [2024-12-06 06:36:14.596219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:01.893 passed 00:10:01.893 Test: blockdev nvme passthru rw ...passed 00:10:01.893 Test: blockdev nvme passthru vendor specific ...passed 00:10:01.893 Test: blockdev nvme admin passthru ...[2024-12-06 06:36:14.599114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:01.893 [2024-12-06 06:36:14.599162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:01.893 passed 00:10:01.893 Test: blockdev copy ...passed 00:10:01.893 Suite: bdevio tests on: Nvme1n1 00:10:01.893 Test: blockdev write read block ...passed 00:10:01.893 Test: blockdev write zeroes read block ...passed 00:10:01.893 Test: blockdev write zeroes read no split ...passed 00:10:02.158 Test: blockdev write zeroes read split ...passed 00:10:02.158 Test: blockdev write zeroes read split partial ...passed 00:10:02.158 Test: blockdev reset ...[2024-12-06 06:36:14.667134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:10:02.158 passed 00:10:02.158 Test: blockdev write read 8 blocks ...[2024-12-06 06:36:14.672255] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful.
00:10:02.158 passed 00:10:02.158 Test: blockdev write read size > 128k ...passed 00:10:02.158 Test: blockdev write read invalid size ...passed 00:10:02.158 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:02.158 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:02.158 Test: blockdev write read max offset ...passed 00:10:02.158 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:02.158 Test: blockdev writev readv 8 blocks ...passed 00:10:02.158 Test: blockdev writev readv 30 x 1block ...passed 00:10:02.158 Test: blockdev writev readv block ...passed 00:10:02.158 Test: blockdev writev readv size > 128k ...passed 00:10:02.158 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:02.158 Test: blockdev comparev and writev ...[2024-12-06 06:36:14.695277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2df434000 len:0x1000 00:10:02.158 [2024-12-06 06:36:14.695357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:02.158 passed 00:10:02.158 Test: blockdev nvme passthru rw ...passed 00:10:02.158 Test: blockdev nvme passthru vendor specific ...passed 00:10:02.158 Test: blockdev nvme admin passthru ...[2024-12-06 06:36:14.696944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:02.158 [2024-12-06 06:36:14.696997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:02.158 passed 00:10:02.158 Test: blockdev copy ...passed 00:10:02.158 Suite: bdevio tests on: Nvme0n1 00:10:02.158 Test: blockdev write read block ...passed 00:10:02.158 Test: blockdev write zeroes read block ...passed 00:10:02.158 Test: blockdev write zeroes read no split ...passed 00:10:02.158 Test: blockdev write zeroes read split ...passed 00:10:02.158 Test: blockdev write zeroes read split partial ...passed 00:10:02.158 Test: blockdev reset ...[2024-12-06 06:36:14.767598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:10:02.158 [2024-12-06 06:36:14.771530] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:10:02.158 passed 00:10:02.158 Test: blockdev write read 8 blocks ...passed 00:10:02.158 Test: blockdev write read size > 128k ...passed 00:10:02.158 Test: blockdev write read invalid size ...passed 00:10:02.158 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:02.158 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:02.158 Test: blockdev write read max offset ...passed 00:10:02.158 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:02.158 Test: blockdev writev readv 8 blocks ...passed 00:10:02.158 Test: blockdev writev readv 30 x 1block ...passed 00:10:02.158 Test: blockdev writev readv block ...passed 00:10:02.158 Test: blockdev writev readv size > 128k ...passed 00:10:02.158 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:02.158 Test: blockdev comparev and writev ...passed 00:10:02.158 Test: blockdev nvme passthru rw ...[2024-12-06 06:36:14.788078] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:10:02.158 separate metadata which is not supported yet. 
00:10:02.158 passed 00:10:02.158 Test: blockdev nvme passthru vendor specific ...passed 00:10:02.158 Test: blockdev nvme admin passthru ...[2024-12-06 06:36:14.789731] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:10:02.158 [2024-12-06 06:36:14.789805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:10:02.158 passed 00:10:02.158 Test: blockdev copy ...passed 00:10:02.158 00:10:02.158 Run Summary: Type Total Ran Passed Failed Inactive 00:10:02.158 suites 6 6 n/a 0 0 00:10:02.158 tests 138 138 138 0 0 00:10:02.158 asserts 893 893 893 0 n/a 00:10:02.158 00:10:02.158 Elapsed time = 1.526 seconds 00:10:02.158 0 00:10:02.158 06:36:14 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 60169 00:10:02.158 06:36:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 60169 ']' 00:10:02.158 06:36:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 60169 00:10:02.158 06:36:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:10:02.158 06:36:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:02.158 06:36:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60169 00:10:02.158 06:36:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:02.158 06:36:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:02.158 killing process with pid 60169 00:10:02.158 06:36:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60169' 00:10:02.158 06:36:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 60169 00:10:02.158 06:36:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 60169 00:10:03.100 06:36:15 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:10:03.100 00:10:03.100 real 0m2.686s 00:10:03.100 user 0m6.555s 00:10:03.100 sys 0m0.449s 00:10:03.100 06:36:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:03.100 06:36:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:03.100 ************************************ 00:10:03.100 END TEST bdev_bounds 00:10:03.100 ************************************ 00:10:03.100 06:36:15 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:03.100 06:36:15 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:03.100 06:36:15 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:03.100 06:36:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:03.100 ************************************ 00:10:03.100 START TEST bdev_nbd 00:10:03.100 ************************************ 00:10:03.100 06:36:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:03.100 06:36:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:10:03.100 06:36:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:10:03.100 06:36:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:10:03.100 06:36:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:03.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:03.100 06:36:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:03.100 06:36:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:10:03.100 06:36:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:10:03.100 06:36:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:10:03.100 06:36:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:10:03.100 06:36:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:10:03.100 06:36:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:10:03.100 06:36:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:03.100 06:36:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:10:03.100 06:36:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:03.100 06:36:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:10:03.100 06:36:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=60223 00:10:03.100 06:36:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:10:03.100 06:36:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 60223 /var/tmp/spdk-nbd.sock 00:10:03.100 06:36:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 60223 ']' 00:10:03.100 06:36:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:03.100 06:36:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:03.100 06:36:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:03.100 06:36:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:03.100 06:36:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:03.100 06:36:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:03.100 [2024-12-06 06:36:15.814415] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
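The nbd test below checks each bdev through the kernel block layer: bdev_svc exports the bdev as a /dev/nbdX device over the dedicated /var/tmp/spdk-nbd.sock socket, the harness waits for the kernel to list the device in /proc/partitions, then pushes one 4096-byte direct-I/O read through it with dd. Per device, that is essentially this sketch (commands and paths as traced below):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $rpc nbd_start_disk Nvme0n1 /dev/nbd0
  # The device is usable once the kernel publishes it in /proc/partitions.
  until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done
  dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
      bs=4096 count=1 iflag=direct
  $rpc nbd_stop_disk /dev/nbd0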
00:10:03.100 [2024-12-06 06:36:15.815506] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:03.361 [2024-12-06 06:36:15.982401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.620 [2024-12-06 06:36:16.132141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.189 06:36:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:04.189 06:36:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:10:04.189 06:36:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:04.189 06:36:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:04.189 06:36:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:04.189 06:36:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:10:04.189 06:36:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:04.189 06:36:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:04.189 06:36:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:04.189 06:36:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:10:04.189 06:36:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:10:04.189 06:36:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:10:04.189 06:36:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:10:04.189 06:36:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:04.189 06:36:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:10:04.469 06:36:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:10:04.469 06:36:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:10:04.469 06:36:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:10:04.470 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:04.470 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:04.470 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:04.470 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:04.470 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:04.470 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:04.470 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:04.470 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:04.470 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:04.470 1+0 records in 
00:10:04.470 1+0 records out 00:10:04.470 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107081 s, 3.8 MB/s 00:10:04.470 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:04.470 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:04.470 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:04.470 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:04.470 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:04.470 06:36:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:04.470 06:36:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:04.470 06:36:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:10:04.730 06:36:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:10:04.730 06:36:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:10:04.730 06:36:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:10:04.730 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:04.730 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:04.730 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:04.730 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:04.730 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:04.730 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:04.730 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:04.730 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:04.730 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:04.730 1+0 records in 00:10:04.730 1+0 records out 00:10:04.730 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00117229 s, 3.5 MB/s 00:10:04.730 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:04.730 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:04.730 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:04.730 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:04.730 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:04.730 06:36:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:04.730 06:36:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:04.730 06:36:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:10:04.993 06:36:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:10:04.993 06:36:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:10:04.993 06:36:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:10:04.993 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:10:04.993 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:04.993 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:04.993 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:04.993 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:10:04.993 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:04.993 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:04.993 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:04.993 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:04.993 1+0 records in 00:10:04.993 1+0 records out 00:10:04.993 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107061 s, 3.8 MB/s 00:10:04.993 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:04.993 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:04.993 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:04.993 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:04.993 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:04.993 06:36:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:04.993 06:36:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:04.993 06:36:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:10:05.254 06:36:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:10:05.254 06:36:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:10:05.254 06:36:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:10:05.254 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:10:05.254 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:05.254 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:05.254 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:05.254 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:10:05.254 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:05.254 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:05.254 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:05.254 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:05.254 1+0 records in 00:10:05.254 1+0 records out 00:10:05.254 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00163386 s, 2.5 MB/s 00:10:05.254 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:05.254 06:36:17 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:05.254 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:05.254 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:05.254 06:36:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:05.254 06:36:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:05.254 06:36:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:05.254 06:36:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:10:05.515 06:36:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:10:05.515 06:36:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:10:05.515 06:36:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:10:05.515 06:36:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:10:05.515 06:36:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:05.515 06:36:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:05.515 06:36:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:05.515 06:36:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:10:05.515 06:36:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:05.515 06:36:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:05.515 06:36:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:05.515 06:36:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:05.515 1+0 records in 00:10:05.515 1+0 records out 00:10:05.515 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00112885 s, 3.6 MB/s 00:10:05.515 06:36:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:05.515 06:36:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:05.515 06:36:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:05.515 06:36:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:05.515 06:36:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:05.515 06:36:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:05.515 06:36:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:05.515 06:36:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:10:05.776 06:36:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:10:05.776 06:36:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:10:05.776 06:36:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:10:05.776 06:36:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:10:05.776 06:36:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:05.776 06:36:18 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:05.776 06:36:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:05.776 06:36:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:10:05.776 06:36:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:05.776 06:36:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:05.776 06:36:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:05.776 06:36:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:05.776 1+0 records in 00:10:05.776 1+0 records out 00:10:05.776 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000717881 s, 5.7 MB/s 00:10:05.776 06:36:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:05.776 06:36:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:05.776 06:36:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:05.776 06:36:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:05.776 06:36:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:05.776 06:36:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:05.776 06:36:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:05.776 06:36:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:06.038 06:36:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:10:06.038 { 00:10:06.038 "nbd_device": "/dev/nbd0", 00:10:06.038 "bdev_name": "Nvme0n1" 00:10:06.038 }, 00:10:06.038 { 00:10:06.038 "nbd_device": "/dev/nbd1", 00:10:06.038 "bdev_name": "Nvme1n1" 00:10:06.038 }, 00:10:06.038 { 00:10:06.038 "nbd_device": "/dev/nbd2", 00:10:06.038 "bdev_name": "Nvme2n1" 00:10:06.038 }, 00:10:06.038 { 00:10:06.038 "nbd_device": "/dev/nbd3", 00:10:06.038 "bdev_name": "Nvme2n2" 00:10:06.038 }, 00:10:06.038 { 00:10:06.038 "nbd_device": "/dev/nbd4", 00:10:06.038 "bdev_name": "Nvme2n3" 00:10:06.038 }, 00:10:06.038 { 00:10:06.038 "nbd_device": "/dev/nbd5", 00:10:06.038 "bdev_name": "Nvme3n1" 00:10:06.038 } 00:10:06.038 ]' 00:10:06.038 06:36:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:10:06.038 06:36:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:10:06.038 06:36:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:10:06.038 { 00:10:06.038 "nbd_device": "/dev/nbd0", 00:10:06.038 "bdev_name": "Nvme0n1" 00:10:06.038 }, 00:10:06.038 { 00:10:06.038 "nbd_device": "/dev/nbd1", 00:10:06.038 "bdev_name": "Nvme1n1" 00:10:06.038 }, 00:10:06.038 { 00:10:06.038 "nbd_device": "/dev/nbd2", 00:10:06.038 "bdev_name": "Nvme2n1" 00:10:06.038 }, 00:10:06.038 { 00:10:06.038 "nbd_device": "/dev/nbd3", 00:10:06.038 "bdev_name": "Nvme2n2" 00:10:06.038 }, 00:10:06.038 { 00:10:06.038 "nbd_device": "/dev/nbd4", 00:10:06.038 "bdev_name": "Nvme2n3" 00:10:06.038 }, 00:10:06.038 { 00:10:06.038 "nbd_device": "/dev/nbd5", 00:10:06.038 "bdev_name": "Nvme3n1" 00:10:06.038 } 00:10:06.038 ]' 00:10:06.038 06:36:18 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:10:06.038 06:36:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:06.038 06:36:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:10:06.038 06:36:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:06.038 06:36:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:06.038 06:36:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:06.038 06:36:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:06.299 06:36:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:06.299 06:36:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:06.299 06:36:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:06.299 06:36:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:06.299 06:36:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:06.299 06:36:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:06.299 06:36:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:06.299 06:36:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:06.299 06:36:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:06.299 06:36:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:06.560 06:36:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:06.560 06:36:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:06.560 06:36:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:06.560 06:36:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:06.560 06:36:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:06.560 06:36:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:06.560 06:36:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:06.560 06:36:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:06.560 06:36:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:06.560 06:36:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:10:06.560 06:36:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:10:06.560 06:36:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:10:06.560 06:36:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:10:06.560 06:36:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:06.560 06:36:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:06.560 06:36:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:10:06.821 06:36:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:06.821 06:36:19 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:10:06.821 06:36:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:06.821 06:36:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:10:06.821 06:36:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:10:06.821 06:36:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:10:06.821 06:36:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:10:06.821 06:36:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:06.821 06:36:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:06.821 06:36:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:10:06.821 06:36:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:06.821 06:36:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:06.821 06:36:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:06.821 06:36:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:10:07.083 06:36:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:10:07.083 06:36:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:10:07.083 06:36:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:10:07.083 06:36:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:07.083 06:36:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:07.083 06:36:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:10:07.083 06:36:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:07.083 06:36:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:07.083 06:36:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:07.083 06:36:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:10:07.359 06:36:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:10:07.359 06:36:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:10:07.359 06:36:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:10:07.359 06:36:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:07.359 06:36:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:07.359 06:36:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:10:07.359 06:36:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:07.359 06:36:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:07.359 06:36:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:07.359 06:36:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:07.359 06:36:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:07.657 06:36:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:07.657 06:36:20 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:07.657 06:36:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:07.657 06:36:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:07.657 06:36:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:07.657 06:36:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:07.657 06:36:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:07.657 06:36:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:07.657 06:36:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:07.657 06:36:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:10:07.657 06:36:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:10:07.657 06:36:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:10:07.657 06:36:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:10:07.657 06:36:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:07.657 06:36:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:07.657 06:36:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:07.657 06:36:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:07.657 06:36:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:07.657 06:36:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:10:07.657 06:36:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:07.657 06:36:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:07.657 06:36:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:07.657 06:36:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:07.657 06:36:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:07.657 06:36:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:10:07.657 06:36:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:07.657 06:36:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:07.657 06:36:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:10:07.920 /dev/nbd0 00:10:07.920 06:36:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:07.920 06:36:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:07.920 06:36:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:07.920 06:36:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:07.920 06:36:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:07.920 
06:36:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:07.920 06:36:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:07.920 06:36:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:07.920 06:36:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:07.920 06:36:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:07.920 06:36:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:07.920 1+0 records in 00:10:07.920 1+0 records out 00:10:07.920 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106517 s, 3.8 MB/s 00:10:07.920 06:36:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:07.920 06:36:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:07.920 06:36:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:07.920 06:36:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:07.920 06:36:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:07.920 06:36:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:07.920 06:36:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:07.920 06:36:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:10:08.181 /dev/nbd1 00:10:08.181 06:36:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:08.181 06:36:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:08.181 06:36:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:08.181 06:36:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:08.181 06:36:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:08.181 06:36:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:08.181 06:36:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:08.181 06:36:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:08.181 06:36:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:08.181 06:36:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:08.181 06:36:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:08.181 1+0 records in 00:10:08.181 1+0 records out 00:10:08.181 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000757806 s, 5.4 MB/s 00:10:08.181 06:36:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:08.181 06:36:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:08.181 06:36:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:08.181 06:36:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:08.181 06:36:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 
-- # return 0 00:10:08.181 06:36:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:08.181 06:36:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:08.181 06:36:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:10:08.443 /dev/nbd10 00:10:08.443 06:36:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:10:08.443 06:36:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:10:08.443 06:36:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:10:08.443 06:36:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:08.443 06:36:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:08.443 06:36:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:08.443 06:36:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:10:08.443 06:36:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:08.443 06:36:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:08.443 06:36:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:08.443 06:36:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:08.443 1+0 records in 00:10:08.443 1+0 records out 00:10:08.443 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000998121 s, 4.1 MB/s 00:10:08.443 06:36:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:08.443 06:36:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:08.443 06:36:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:08.443 06:36:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:08.443 06:36:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:08.443 06:36:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:08.443 06:36:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:08.443 06:36:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:10:08.706 /dev/nbd11 00:10:08.706 06:36:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:10:08.706 06:36:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:10:08.706 06:36:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:10:08.706 06:36:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:08.706 06:36:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:08.706 06:36:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:08.706 06:36:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:10:08.706 06:36:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:08.706 06:36:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:08.706 06:36:21 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:08.706 06:36:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:08.706 1+0 records in 00:10:08.706 1+0 records out 00:10:08.706 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000897056 s, 4.6 MB/s 00:10:08.706 06:36:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:08.706 06:36:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:08.706 06:36:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:08.706 06:36:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:08.706 06:36:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:08.706 06:36:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:08.706 06:36:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:08.706 06:36:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:10:08.966 /dev/nbd12 00:10:08.966 06:36:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:10:08.966 06:36:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:10:08.966 06:36:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:10:08.966 06:36:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:08.966 06:36:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:08.966 06:36:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:08.966 06:36:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:10:08.966 06:36:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:08.966 06:36:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:08.966 06:36:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:08.966 06:36:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:08.966 1+0 records in 00:10:08.966 1+0 records out 00:10:08.966 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00092724 s, 4.4 MB/s 00:10:08.966 06:36:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:08.966 06:36:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:08.966 06:36:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:08.966 06:36:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:08.966 06:36:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:08.966 06:36:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:08.966 06:36:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:08.966 06:36:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:10:08.966 /dev/nbd13 00:10:09.228 06:36:21 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:10:09.228 06:36:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:10:09.228 06:36:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:10:09.228 06:36:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:09.228 06:36:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:09.228 06:36:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:09.228 06:36:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:10:09.228 06:36:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:09.228 06:36:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:09.228 06:36:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:09.228 06:36:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:09.228 1+0 records in 00:10:09.228 1+0 records out 00:10:09.228 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0010786 s, 3.8 MB/s 00:10:09.228 06:36:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:09.228 06:36:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:09.228 06:36:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:09.228 06:36:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:09.228 06:36:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:09.228 06:36:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:09.228 06:36:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:09.228 06:36:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:09.228 06:36:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:09.228 06:36:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:09.228 06:36:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:09.228 { 00:10:09.228 "nbd_device": "/dev/nbd0", 00:10:09.228 "bdev_name": "Nvme0n1" 00:10:09.228 }, 00:10:09.228 { 00:10:09.228 "nbd_device": "/dev/nbd1", 00:10:09.228 "bdev_name": "Nvme1n1" 00:10:09.228 }, 00:10:09.228 { 00:10:09.228 "nbd_device": "/dev/nbd10", 00:10:09.228 "bdev_name": "Nvme2n1" 00:10:09.228 }, 00:10:09.228 { 00:10:09.228 "nbd_device": "/dev/nbd11", 00:10:09.228 "bdev_name": "Nvme2n2" 00:10:09.228 }, 00:10:09.228 { 00:10:09.228 "nbd_device": "/dev/nbd12", 00:10:09.228 "bdev_name": "Nvme2n3" 00:10:09.228 }, 00:10:09.228 { 00:10:09.228 "nbd_device": "/dev/nbd13", 00:10:09.228 "bdev_name": "Nvme3n1" 00:10:09.228 } 00:10:09.228 ]' 00:10:09.228 06:36:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:09.228 { 00:10:09.228 "nbd_device": "/dev/nbd0", 00:10:09.228 "bdev_name": "Nvme0n1" 00:10:09.228 }, 00:10:09.228 { 00:10:09.228 "nbd_device": "/dev/nbd1", 00:10:09.228 "bdev_name": "Nvme1n1" 00:10:09.228 }, 00:10:09.228 { 00:10:09.228 "nbd_device": "/dev/nbd10", 00:10:09.228 "bdev_name": "Nvme2n1" 00:10:09.228 }, 00:10:09.228 { 
00:10:09.228 "nbd_device": "/dev/nbd11", 00:10:09.228 "bdev_name": "Nvme2n2" 00:10:09.228 }, 00:10:09.228 { 00:10:09.228 "nbd_device": "/dev/nbd12", 00:10:09.228 "bdev_name": "Nvme2n3" 00:10:09.228 }, 00:10:09.228 { 00:10:09.228 "nbd_device": "/dev/nbd13", 00:10:09.228 "bdev_name": "Nvme3n1" 00:10:09.228 } 00:10:09.228 ]' 00:10:09.228 06:36:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:09.489 06:36:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:09.489 /dev/nbd1 00:10:09.489 /dev/nbd10 00:10:09.489 /dev/nbd11 00:10:09.489 /dev/nbd12 00:10:09.489 /dev/nbd13' 00:10:09.489 06:36:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:09.489 /dev/nbd1 00:10:09.489 /dev/nbd10 00:10:09.489 /dev/nbd11 00:10:09.489 /dev/nbd12 00:10:09.489 /dev/nbd13' 00:10:09.489 06:36:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:09.489 06:36:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:10:09.489 06:36:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:10:09.489 06:36:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:10:09.489 06:36:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:10:09.489 06:36:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:10:09.489 06:36:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:09.489 06:36:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:09.489 06:36:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:09.489 06:36:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:09.489 06:36:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:09.489 06:36:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:10:09.489 256+0 records in 00:10:09.489 256+0 records out 00:10:09.489 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00579693 s, 181 MB/s 00:10:09.489 06:36:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:09.489 06:36:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:09.489 256+0 records in 00:10:09.489 256+0 records out 00:10:09.489 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.196817 s, 5.3 MB/s 00:10:09.489 06:36:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:09.489 06:36:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:09.750 256+0 records in 00:10:09.750 256+0 records out 00:10:09.750 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147002 s, 7.1 MB/s 00:10:09.750 06:36:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:09.750 06:36:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:10:10.010 256+0 records in 00:10:10.010 256+0 records out 00:10:10.010 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.203856 s, 5.1 MB/s 00:10:10.010 06:36:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:10.010 06:36:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:10:10.269 256+0 records in 00:10:10.269 256+0 records out 00:10:10.269 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.222822 s, 4.7 MB/s 00:10:10.269 06:36:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:10.269 06:36:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:10:10.529 256+0 records in 00:10:10.529 256+0 records out 00:10:10.529 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.247511 s, 4.2 MB/s 00:10:10.529 06:36:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:10.529 06:36:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:10:10.790 256+0 records in 00:10:10.790 256+0 records out 00:10:10.790 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.250752 s, 4.2 MB/s 00:10:10.790 06:36:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:10:10.790 06:36:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:10.790 06:36:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:10.790 06:36:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:10.790 06:36:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:10.790 06:36:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:10.790 06:36:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:10.790 06:36:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:10.790 06:36:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:10:10.790 06:36:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:10.790 06:36:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:10:10.790 06:36:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:10.790 06:36:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:10:10.790 06:36:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:10.790 06:36:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:10:10.790 06:36:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:10.790 06:36:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:10:10.790 06:36:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:10.790 06:36:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # 
cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:10:10.790 06:36:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:10.790 06:36:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:10:10.790 06:36:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:10.790 06:36:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:10.790 06:36:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:10.790 06:36:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:10.790 06:36:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:10.790 06:36:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:11.052 06:36:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:11.052 06:36:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:11.052 06:36:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:11.052 06:36:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:11.052 06:36:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:11.052 06:36:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:11.052 06:36:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:11.052 06:36:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:11.052 06:36:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:11.052 06:36:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:11.312 06:36:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:11.312 06:36:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:11.312 06:36:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:11.312 06:36:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:11.312 06:36:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:11.312 06:36:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:11.312 06:36:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:11.312 06:36:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:11.312 06:36:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:11.312 06:36:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:10:11.312 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:10:11.312 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:10:11.312 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:10:11.312 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:11.312 06:36:24 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:11.312 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:10:11.312 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:11.312 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:11.312 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:11.312 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:10:11.572 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:10:11.572 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:10:11.572 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:10:11.572 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:11.572 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:11.572 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:10:11.572 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:11.572 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:11.572 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:11.572 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:10:11.896 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:10:11.896 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:10:11.896 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:10:11.896 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:11.896 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:11.896 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:10:11.896 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:11.896 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:11.896 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:11.896 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:10:12.155 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:10:12.155 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:10:12.155 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:10:12.155 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:12.155 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:12.155 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:10:12.156 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:12.156 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:12.156 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:12.156 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:10:12.156 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:12.416 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:12.416 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:12.416 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:12.416 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:12.416 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:12.416 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:12.416 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:12.416 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:12.416 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:12.416 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:10:12.416 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:12.416 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:10:12.416 06:36:24 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:10:12.416 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:12.416 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:10:12.416 06:36:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:10:12.677 malloc_lvol_verify 00:10:12.677 06:36:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:10:12.938 8203fafa-7a53-48c6-8b48-057dd53d3e1e 00:10:12.938 06:36:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:10:12.938 dd592ffd-aa9f-4328-9536-e69133036067 00:10:12.938 06:36:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:10:13.197 /dev/nbd0 00:10:13.197 06:36:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:10:13.197 06:36:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:10:13.197 06:36:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:10:13.197 06:36:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:10:13.197 06:36:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:10:13.197 mke2fs 1.47.0 (5-Feb-2023) 00:10:13.197 Discarding device blocks: 0/4096 done 00:10:13.197 Creating filesystem with 4096 1k blocks and 1024 inodes 00:10:13.197 00:10:13.197 Allocating group tables: 0/1 done 00:10:13.197 Writing inode tables: 0/1 done 00:10:13.197 Creating journal (1024 blocks): done 00:10:13.197 Writing superblocks and filesystem accounting information: 0/1 done 00:10:13.197 00:10:13.197 06:36:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:10:13.197 06:36:25 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:13.197 06:36:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:13.197 06:36:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:13.198 06:36:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:13.198 06:36:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:13.198 06:36:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:13.458 06:36:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:13.458 06:36:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:13.458 06:36:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:13.458 06:36:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:13.458 06:36:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:13.458 06:36:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:13.458 06:36:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:13.458 06:36:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:13.458 06:36:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 60223 00:10:13.458 06:36:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 60223 ']' 00:10:13.458 06:36:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 60223 00:10:13.458 06:36:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:10:13.458 06:36:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:13.458 06:36:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60223 00:10:13.719 06:36:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:13.719 06:36:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:13.719 killing process with pid 60223 00:10:13.719 06:36:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60223' 00:10:13.719 06:36:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 60223 00:10:13.719 06:36:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 60223 00:10:14.661 ************************************ 00:10:14.661 END TEST bdev_nbd 00:10:14.661 ************************************ 00:10:14.661 06:36:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:10:14.661 00:10:14.661 real 0m11.372s 00:10:14.661 user 0m15.575s 00:10:14.661 sys 0m3.661s 00:10:14.661 06:36:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:14.661 06:36:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:14.661 skipping fio tests on NVMe due to multi-ns failures. 00:10:14.661 06:36:27 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:10:14.661 06:36:27 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:10:14.661 06:36:27 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
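Two helper patterns recur throughout the nbd stage above. Device state is tracked by polling /proc/partitions (the repeated "(( i <= 20 ))" / "grep -q -w nbdN /proc/partitions" / "break" sequences), and a freshly attached device is proven readable by pulling a single 4 KiB block through direct I/O with dd. A minimal bash sketch of both helpers, reconstructed from the trace; the 0.1 s sleep interval is an assumption, since only the 20-iteration bound and the grep/dd/stat/rm commands appear in the log:

    # Illustrative reconstruction of the readiness check traced at
    # common/autotest_common.sh@872-893: wait for the device node to show
    # up in /proc/partitions, then prove it is readable with one 4 KiB
    # direct-I/O read. The 0.1 s sleep interval is assumed.
    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        for ((i = 1; i <= 20; i++)); do
            dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct && break
            sleep 0.1
        done
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]    # mirrors the trace's '[' 4096 '!=' 0 ']' check
    }

    # Teardown counterpart traced at bdev/nbd_common.sh@35-45: poll until
    # the kernel drops the device after the nbd_stop_disk RPC.
    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1
        done
        return 0
    }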
00:10:14.661 06:36:27 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT
00:10:14.661 06:36:27 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:10:14.661 06:36:27 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:10:14.661 06:36:27 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:14.661 06:36:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:10:14.661 ************************************
00:10:14.661 START TEST bdev_verify
00:10:14.661 ************************************
00:10:14.661 06:36:27 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:10:14.661 [2024-12-06 06:36:27.274206] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization...
00:10:14.661 [2024-12-06 06:36:27.274365] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60625 ]
00:10:14.923 [2024-12-06 06:36:27.440016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:10:14.923 [2024-12-06 06:36:27.582110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:14.923 [2024-12-06 06:36:27.582226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:15.495 Running I/O for 5 seconds...
00:10:17.830 17600.00 IOPS, 68.75 MiB/s [2024-12-06T06:36:31.516Z] 17504.00 IOPS, 68.38 MiB/s [2024-12-06T06:36:32.902Z] 17024.00 IOPS, 66.50 MiB/s [2024-12-06T06:36:33.473Z] 16928.00 IOPS, 66.12 MiB/s [2024-12-06T06:36:33.473Z] 16998.40 IOPS, 66.40 MiB/s
00:10:20.732 Latency(us)
00:10:20.732 [2024-12-06T06:36:33.473Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:20.732 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:20.732 Verification LBA range: start 0x0 length 0xbd0bd
00:10:20.732 Nvme0n1 : 5.06 1390.03 5.43 0.00 0.00 91775.68 21173.17 80659.69
00:10:20.732 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:20.732 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:10:20.732 Nvme0n1 : 5.06 1392.13 5.44 0.00 0.00 91663.55 24097.08 92758.65
00:10:20.732 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:20.732 Verification LBA range: start 0x0 length 0xa0000
00:10:20.732 Nvme1n1 : 5.07 1389.61 5.43 0.00 0.00 91703.27 24500.38 75416.81
00:10:20.732 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:20.732 Verification LBA range: start 0xa0000 length 0xa0000
00:10:20.732 Nvme1n1 : 5.06 1391.73 5.44 0.00 0.00 91358.92 25710.28 77836.60
00:10:20.732 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:20.732 Verification LBA range: start 0x0 length 0x80000
00:10:20.732 Nvme2n1 : 5.07 1389.17 5.43 0.00 0.00 91336.51 24399.56 73400.32
00:10:20.732 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:20.732 Verification LBA range: start 0x80000 length 0x80000
00:10:20.732 Nvme2n1 : 5.08 1397.55 5.46 0.00 0.00 90631.10 10687.41 73400.32
00:10:20.732 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:20.732 Verification LBA range: start 0x0 length 0x80000
00:10:20.732 Nvme2n2 : 5.07 1388.78 5.42 0.00 0.00 91172.18 26012.75 71383.83
00:10:20.732 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:20.732 Verification LBA range: start 0x80000 length 0x80000
00:10:20.732 Nvme2n2 : 5.11 1403.40 5.48 0.00 0.00 90266.69 18551.73 77030.01
00:10:20.732 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:20.732 Verification LBA range: start 0x0 length 0x80000
00:10:20.732 Nvme2n3 : 5.08 1397.59 5.46 0.00 0.00 90472.50 7662.67 76626.71
00:10:20.732 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:20.732 Verification LBA range: start 0x80000 length 0x80000
00:10:20.732 Nvme2n3 : 5.11 1402.97 5.48 0.00 0.00 90149.51 18955.03 77836.60
00:10:20.732 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:20.732 Verification LBA range: start 0x0 length 0x20000
00:10:20.732 Nvme3n1 : 5.09 1396.29 5.45 0.00 0.00 90381.17 9225.45 78643.20
00:10:20.732 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:20.732 Verification LBA range: start 0x20000 length 0x20000
00:10:20.732 Nvme3n1 : 5.11 1402.57 5.48 0.00 0.00 90021.48 18450.90 79449.80
00:10:20.732 [2024-12-06T06:36:33.473Z] ===================================================================================================================
00:10:20.732 [2024-12-06T06:36:33.473Z] Total : 16741.82 65.40 0.00 0.00 90906.65 7662.67 92758.65
00:10:22.117 ************************************
00:10:22.117 END TEST bdev_verify
00:10:22.117
00:10:22.117 real 0m7.389s
00:10:22.117 user 0m13.552s
00:10:22.117 sys 0m0.343s
00:10:22.117 06:36:34 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:22.117 06:36:34 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:10:22.117 ************************************
00:10:22.117 06:36:34 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:10:22.117 06:36:34 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:10:22.117 06:36:34 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:22.117 06:36:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:10:22.117 ************************************
00:10:22.117 START TEST bdev_verify_big_io
00:10:22.117 ************************************
00:10:22.117 06:36:34 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:10:22.117 [2024-12-06 06:36:34.718323] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization...
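The verify stage that just ended and the big-I/O stage starting below drive the same bdevperf binary; only the I/O size changes between them (4096 vs 65536 bytes). The invocation pattern, lifted from the run_test lines above with editorial comments; flags are glossed only where the surrounding log confirms their effect, and -C is carried over verbatim without interpretation:

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

    # -q 128: 128 outstanding I/Os per job; -o: I/O size in bytes;
    # -w verify: write-and-read-back-check workload; -t 5: run for 5 s;
    # -m 0x3: two-core mask (matches "Total cores available: 2" above).
    "$BDEVPERF" --json "$CONF" -q 128 -o 4096  -w verify -t 5 -C -m 0x3
    "$BDEVPERF" --json "$CONF" -q 128 -o 65536 -w verify -t 5 -C -m 0x3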
00:10:22.117 [2024-12-06 06:36:34.719204] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60723 ]
00:10:22.378 [2024-12-06 06:36:34.895804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:10:22.378 [2024-12-06 06:36:35.049679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:22.378 [2024-12-06 06:36:35.050017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:23.318 Running I/O for 5 seconds...
00:10:27.138 785.00 IOPS, 49.06 MiB/s [2024-12-06T06:36:41.781Z] 1727.50 IOPS, 107.97 MiB/s [2024-12-06T06:36:41.781Z] 2386.00 IOPS, 149.12 MiB/s [2024-12-06T06:36:42.041Z] 2369.50 IOPS, 148.09 MiB/s
00:10:29.300 Latency(us)
00:10:29.300 [2024-12-06T06:36:42.041Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:29.300 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:29.300 Verification LBA range: start 0x0 length 0xbd0b
00:10:29.300 Nvme0n1 : 5.47 140.33 8.77 0.00 0.00 879619.15 28634.19 916294.10
00:10:29.300 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:29.300 Verification LBA range: start 0xbd0b length 0xbd0b
00:10:29.300 Nvme0n1 : 5.72 117.55 7.35 0.00 0.00 1051257.83 22383.06 1071160.71
00:10:29.300 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:29.300 Verification LBA range: start 0x0 length 0xa000
00:10:29.300 Nvme1n1 : 5.69 146.10 9.13 0.00 0.00 826324.37 55251.89 758201.11
00:10:29.300 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:29.300 Verification LBA range: start 0xa000 length 0xa000
00:10:29.300 Nvme1n1 : 5.72 116.10 7.26 0.00 0.00 1025748.34 85095.98 896935.78
00:10:29.300 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:29.300 Verification LBA range: start 0x0 length 0x8000
00:10:29.300 Nvme2n1 : 5.77 151.04 9.44 0.00 0.00 780359.40 26617.70 771106.66
00:10:29.300 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:29.300 Verification LBA range: start 0x8000 length 0x8000
00:10:29.300 Nvme2n1 : 5.83 113.97 7.12 0.00 0.00 1015585.88 75013.51 1729343.80
00:10:29.300 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:29.300 Verification LBA range: start 0x0 length 0x8000
00:10:29.300 Nvme2n2 : 5.78 151.16 9.45 0.00 0.00 758174.20 27222.65 796917.76
00:10:29.300 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:29.300 Verification LBA range: start 0x8000 length 0x8000
00:10:29.300 Nvme2n2 : 5.85 117.87 7.37 0.00 0.00 952200.96 37103.46 1768060.46
00:10:29.300 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:29.300 Verification LBA range: start 0x0 length 0x8000
00:10:29.300 Nvme2n3 : 5.78 155.07 9.69 0.00 0.00 722643.10 48597.46 816276.09
00:10:29.300 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:29.300 Verification LBA range: start 0x8000 length 0x8000
00:10:29.300 Nvme2n3 : 5.89 127.97 8.00 0.00 0.00 848683.71 14317.10 1819682.66
00:10:29.300 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:29.300 Verification LBA range: start 0x0 length 0x2000
00:10:29.300 Nvme3n1 : 5.83 171.75 10.73 0.00 0.00 637797.71 1001.94 845313.58
00:10:29.300 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:29.300 Verification LBA range: start 0x2000 length 0x2000
00:10:29.300 Nvme3n1 : 5.94 164.15 10.26 0.00 0.00 650066.97 1077.56 1393799.48
00:10:29.300 [2024-12-06T06:36:42.041Z] ===================================================================================================================
00:10:29.300 [2024-12-06T06:36:42.041Z] Total : 1673.06 104.57 0.00 0.00 826431.47 1001.94 1819682.66
00:10:31.228
00:10:31.228 real 0m8.849s
00:10:31.228 user 0m16.454s
00:10:31.228 sys 0m0.375s
00:10:31.228 ************************************
00:10:31.228 END TEST bdev_verify_big_io
00:10:31.228 ************************************
00:10:31.228 06:36:43 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:31.228 06:36:43 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:10:31.228 06:36:43 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
06:36:43 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
06:36:43 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
06:36:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:10:31.228 ************************************
00:10:31.228 START TEST bdev_write_zeroes
00:10:31.228 ************************************
00:10:31.228 06:36:43 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:31.228 [2024-12-06 06:36:43.628838] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization...
00:10:31.228 [2024-12-06 06:36:43.628982] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60832 ]
00:10:31.228 [2024-12-06 06:36:43.791932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:31.228 [2024-12-06 06:36:43.928727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:32.166 Running I/O for 1 seconds...
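The starred banners and the real/user/sys triplets that bracket each stage (END TEST bdev_verify_big_io just above, for example) come from the run_test wrapper named in the blockdev.sh@814-816 lines. A simplified illustration of the behaviour visible in this log; the real helper in autotest_common.sh also manages the xtrace state that the set +x lines hint at, which this sketch omits:

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"    # bash's time builtin emits the real/user/sys lines
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }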
00:10:33.104 42240.00 IOPS, 165.00 MiB/s
00:10:33.104 Latency(us)
00:10:33.104 [2024-12-06T06:36:45.845Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:33.104 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:33.104 Nvme0n1 : 1.03 7053.28 27.55 0.00 0.00 18089.67 6326.74 37708.41
00:10:33.104 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:33.104 Nvme1n1 : 1.03 7044.01 27.52 0.00 0.00 18087.00 12451.84 37506.76
00:10:33.104 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:33.104 Nvme2n1 : 1.03 7035.92 27.48 0.00 0.00 17942.63 12855.14 31457.28
00:10:33.104 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:33.104 Nvme2n2 : 1.03 7082.47 27.67 0.00 0.00 17795.16 7007.31 28230.89
00:10:33.104 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:33.104 Nvme2n3 : 1.03 7074.34 27.63 0.00 0.00 17755.42 7561.85 26819.35
00:10:33.104 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:33.104 Nvme3n1 : 1.03 7066.28 27.60 0.00 0.00 17703.83 7662.67 26416.05
00:10:33.104 [2024-12-06T06:36:45.845Z] ===================================================================================================================
00:10:33.104 [2024-12-06T06:36:45.845Z] Total : 42356.30 165.45 0.00 0.00 17894.98 6326.74 37708.41
00:10:34.046
00:10:34.046 real 0m2.918s
00:10:34.046 user 0m2.528s
00:10:34.046 sys 0m0.263s
00:10:34.046 ************************************
00:10:34.046 END TEST bdev_write_zeroes ************************************
00:10:34.046 06:36:46 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:34.046 06:36:46 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:10:34.046 06:36:46 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:34.046 06:36:46 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:10:34.046 06:36:46 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:34.046 06:36:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:10:34.046 ************************************
00:10:34.046 START TEST bdev_json_nonenclosed
00:10:34.046 ************************************
00:10:34.046 06:36:46 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:34.046 [2024-12-06 06:36:46.632855] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization...
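The MiB/s column in the write_zeroes table above follows directly from the IOPS column at the 4096-byte I/O size, which makes a quick sanity check possible:

    # Headline sample above: 42240 IOPS at 4096 bytes per I/O.
    echo $((42240 * 4096))   # 173015040 bytes/s
    # 173015040 / 1048576 = 165.0 MiB/s, matching the reported 165.00 MiB/s.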
00:10:34.046 [2024-12-06 06:36:46.633042] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60885 ]
00:10:34.309 [2024-12-06 06:36:46.805872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:34.309 [2024-12-06 06:36:46.943630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:34.309 [2024-12-06 06:36:46.943733] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:10:34.309 [2024-12-06 06:36:46.943765] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:10:34.309 [2024-12-06 06:36:46.943776] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:10:34.571
00:10:34.571 real 0m0.597s
00:10:34.571 user 0m0.360s
00:10:34.571 sys 0m0.130s
00:10:34.571 ************************************
00:10:34.571 END TEST bdev_json_nonenclosed
00:10:34.571 ************************************
00:10:34.571 06:36:47 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:34.571 06:36:47 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:10:34.572 06:36:47 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:34.572 06:36:47 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:10:34.572 06:36:47 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:34.572 06:36:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:10:34.572 ************************************
00:10:34.572 START TEST bdev_json_nonarray
00:10:34.572 ************************************
00:10:34.572 06:36:47 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:34.572 [2024-12-06 06:36:47.292000] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization...
00:10:34.572 [2024-12-06 06:36:47.292164] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60916 ]
00:10:34.836 [2024-12-06 06:36:47.459089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:35.098 [2024-12-06 06:36:47.602194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:35.098 [2024-12-06 06:36:47.602321] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
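bdev_json_nonarray is the companion negative test: the error just logged comes from a config whose outer object exists but whose "subsystems" key is not an array. Both shapes below are inferred from the two error messages, not read from the files themselves:

    # Accepted shape:
    #   { "subsystems": [ { "subsystem": "bdev", "config": [ ... ] } ] }
    # Rejected shape (roughly what nonarray.json must contain):
    #   { "subsystems": { "subsystem": "bdev" } }
    # As with nonenclosed.json, success here would itself be a test failure:
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/examples/bdevperf" --json "$SPDK/test/bdev/nonarray.json" \
        -q 128 -o 4096 -w write_zeroes -t 1 && exit 1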
00:10:35.098 [2024-12-06 06:36:47.602342] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:10:35.098 [2024-12-06 06:36:47.602353] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:10:35.098
00:10:35.098 real 0m0.594s
00:10:35.098 user 0m0.366s
00:10:35.098 sys 0m0.121s
00:10:35.098 ************************************
00:10:35.098 END TEST bdev_json_nonarray
00:10:35.098 ************************************
00:10:35.098 06:36:47 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:35.098 06:36:47 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:10:35.360 06:36:47 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]]
00:10:35.360 06:36:47 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]]
00:10:35.360 06:36:47 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]]
00:10:35.360 06:36:47 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT
00:10:35.360 06:36:47 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup
00:10:35.360 06:36:47 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:10:35.360 06:36:47 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:10:35.360 06:36:47 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]]
00:10:35.360 06:36:47 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]]
00:10:35.360 06:36:47 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]]
00:10:35.360 06:36:47 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]]
00:10:35.360
00:10:35.360 real 0m40.634s
00:10:35.360 user 1m0.649s
00:10:35.360 sys 0m6.548s
00:10:35.360 ************************************
00:10:35.360 END TEST blockdev_nvme
00:10:35.360 ************************************
00:10:35.360 06:36:47 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:35.360 06:36:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:10:35.360 06:36:47 -- spdk/autotest.sh@209 -- # uname -s
00:10:35.360 06:36:47 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]]
00:10:35.360 06:36:47 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt
00:10:35.360 06:36:47 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:35.360 06:36:47 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:35.360 06:36:47 -- common/autotest_common.sh@10 -- # set +x
00:10:35.360 ************************************
00:10:35.360 START TEST blockdev_nvme_gpt
00:10:35.360 ************************************
00:10:35.360 06:36:47 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt
00:10:35.360 * Looking for test storage...
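With blockdev_nvme finished, autotest.sh confirms the platform (the uname -s check above) and re-enters blockdev.sh in gpt mode. After locating test storage, the first wall of xtrace below is scripts/common.sh picking lcov flags by comparing version strings field by field ("lt 1.15 2"). Condensed to its core logic; this is a simplification of the traced cmp_versions, not the verbatim script:

    # Return 0 (true) iff version $1 is strictly older than version $2.
    lt() {
        local IFS='.-'                 # split fields on dots and dashes, as the trace does
        local -a ver1=($1) ver2=($2)
        local v
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
        done
        return 1                       # equal versions are not less-than
    }
    lt 1.15 2 && echo older            # prints "older", matching the trace's return 0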
00:10:35.360 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:10:35.360 06:36:48 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:35.360 06:36:48 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:35.360 06:36:48 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version 00:10:35.360 06:36:48 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:35.360 06:36:48 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:35.360 06:36:48 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:35.360 06:36:48 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:35.360 06:36:48 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:10:35.360 06:36:48 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:10:35.360 06:36:48 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:10:35.360 06:36:48 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:10:35.360 06:36:48 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:10:35.360 06:36:48 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:10:35.360 06:36:48 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:10:35.360 06:36:48 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:35.360 06:36:48 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:10:35.360 06:36:48 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:10:35.360 06:36:48 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:35.360 06:36:48 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:35.360 06:36:48 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:10:35.360 06:36:48 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:10:35.360 06:36:48 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:35.360 06:36:48 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:10:35.360 06:36:48 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:10:35.360 06:36:48 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:10:35.360 06:36:48 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:10:35.360 06:36:48 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:35.360 06:36:48 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:10:35.360 06:36:48 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:10:35.360 06:36:48 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:35.360 06:36:48 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:35.360 06:36:48 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:10:35.361 06:36:48 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:35.361 06:36:48 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:35.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.361 --rc genhtml_branch_coverage=1 00:10:35.361 --rc genhtml_function_coverage=1 00:10:35.361 --rc genhtml_legend=1 00:10:35.361 --rc geninfo_all_blocks=1 00:10:35.361 --rc geninfo_unexecuted_blocks=1 00:10:35.361 00:10:35.361 ' 00:10:35.361 06:36:48 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:35.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.361 --rc 
genhtml_branch_coverage=1 00:10:35.361 --rc genhtml_function_coverage=1 00:10:35.361 --rc genhtml_legend=1 00:10:35.361 --rc geninfo_all_blocks=1 00:10:35.361 --rc geninfo_unexecuted_blocks=1 00:10:35.361 00:10:35.361 ' 00:10:35.361 06:36:48 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:35.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.361 --rc genhtml_branch_coverage=1 00:10:35.361 --rc genhtml_function_coverage=1 00:10:35.361 --rc genhtml_legend=1 00:10:35.361 --rc geninfo_all_blocks=1 00:10:35.361 --rc geninfo_unexecuted_blocks=1 00:10:35.361 00:10:35.361 ' 00:10:35.361 06:36:48 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:35.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.361 --rc genhtml_branch_coverage=1 00:10:35.361 --rc genhtml_function_coverage=1 00:10:35.361 --rc genhtml_legend=1 00:10:35.361 --rc geninfo_all_blocks=1 00:10:35.361 --rc geninfo_unexecuted_blocks=1 00:10:35.361 00:10:35.361 ' 00:10:35.361 06:36:48 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:35.361 06:36:48 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:10:35.361 06:36:48 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:10:35.361 06:36:48 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:35.361 06:36:48 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:10:35.361 06:36:48 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:10:35.361 06:36:48 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:10:35.361 06:36:48 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:10:35.361 06:36:48 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:10:35.361 06:36:48 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:10:35.361 06:36:48 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:10:35.361 06:36:48 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:10:35.361 06:36:48 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:10:35.624 06:36:48 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:10:35.624 06:36:48 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:10:35.624 06:36:48 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:10:35.624 06:36:48 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:10:35.624 06:36:48 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:10:35.624 06:36:48 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:10:35.624 06:36:48 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:10:35.624 06:36:48 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:10:35.625 06:36:48 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:10:35.625 06:36:48 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:10:35.625 06:36:48 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:10:35.625 06:36:48 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61000 00:10:35.625 06:36:48 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:35.625 06:36:48 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 61000 
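At this point blockdev.sh has launched the long-lived spdk_tgt daemon (pid 61000 here) that every later rpc_cmd in the suite talks to, and parks on waitforlisten until its RPC server answers. The launch-and-wait idiom just traced, in outline; killprocess and waitforlisten are harness helpers from autotest_common.sh, and the socket is the default /var/tmp/spdk.sock echoed below:

    # Start the SPDK target, arrange cleanup, and block until its RPC socket is live.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" &
    spdk_tgt_pid=$!
    trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$spdk_tgt_pid"      # polls /var/tmp/spdk.sock for this pid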
00:10:35.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:35.625 06:36:48 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 61000 ']' 00:10:35.625 06:36:48 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.625 06:36:48 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:35.625 06:36:48 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:35.625 06:36:48 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:35.625 06:36:48 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:10:35.625 06:36:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:35.625 [2024-12-06 06:36:48.194514] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:10:35.625 [2024-12-06 06:36:48.194669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61000 ] 00:10:35.625 [2024-12-06 06:36:48.360368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.884 [2024-12-06 06:36:48.503226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.821 06:36:49 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:36.821 06:36:49 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:10:36.821 06:36:49 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:10:36.821 06:36:49 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:10:36.821 06:36:49 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:37.091 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:37.091 Waiting for block devices as requested 00:10:37.091 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:37.349 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:37.349 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:37.349 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:42.718 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:42.718 06:36:55 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:10:42.718 06:36:55 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:10:42.718 06:36:55 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:10:42.718 06:36:55 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:10:42.718 06:36:55 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:10:42.718 06:36:55 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:10:42.718 06:36:55 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:10:42.718 06:36:55 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:10:42.718 06:36:55 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:42.718 06:36:55 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:10:42.718 06:36:55 blockdev_nvme_gpt -- 
common/autotest_common.sh@1650 -- # local device=nvme0n1 00:10:42.718 06:36:55 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:10:42.718 06:36:55 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:42.718 06:36:55 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:10:42.718 06:36:55 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:10:42.718 06:36:55 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:42.718 06:36:55 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:10:42.718 06:36:55 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:10:42.718 06:36:55 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:10:42.718 06:36:55 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:42.718 06:36:55 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:10:42.718 06:36:55 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:10:42.718 06:36:55 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:42.718 06:36:55 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:10:42.718 06:36:55 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:10:42.718 06:36:55 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:10:42.718 06:36:55 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:42.718 06:36:55 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:42.718 06:36:55 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:10:42.718 06:36:55 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:10:42.718 06:36:55 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:10:42.718 06:36:55 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:42.718 06:36:55 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:42.718 06:36:55 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:10:42.718 06:36:55 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:10:42.718 06:36:55 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:10:42.718 06:36:55 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:42.718 06:36:55 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:10:42.718 06:36:55 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:10:42.718 06:36:55 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:42.718 06:36:55 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:10:42.718 06:36:55 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:10:42.718 06:36:55 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:10:42.718 06:36:55 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:42.718 06:36:55 
blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:10:42.718 06:36:55 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:10:42.718 06:36:55 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:10:42.718 06:36:55 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:10:42.718 06:36:55 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:10:42.718 06:36:55 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:10:42.718 06:36:55 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:10:42.718 06:36:55 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:10:42.718 BYT; 00:10:42.718 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:10:42.718 06:36:55 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:10:42.718 BYT; 00:10:42.718 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:10:42.718 06:36:55 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:10:42.718 06:36:55 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:10:42.718 06:36:55 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:10:42.718 06:36:55 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:10:42.718 06:36:55 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:10:42.718 06:36:55 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:10:42.718 06:36:55 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:10:42.718 06:36:55 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:10:42.718 06:36:55 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:10:42.718 06:36:55 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:42.718 06:36:55 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:10:42.718 06:36:55 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:10:42.718 06:36:55 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:42.718 06:36:55 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:10:42.718 06:36:55 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:42.718 06:36:55 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:42.718 06:36:55 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:42.718 06:36:55 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:10:42.718 06:36:55 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:10:42.718 06:36:55 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:10:42.718 06:36:55 blockdev_nvme_gpt -- scripts/common.sh@427 -- # 
GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:42.718 06:36:55 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:10:42.718 06:36:55 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:10:42.718 06:36:55 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:42.718 06:36:55 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:10:42.718 06:36:55 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:42.718 06:36:55 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:42.718 06:36:55 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:42.718 06:36:55 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:10:43.648 The operation has completed successfully. 00:10:43.648 06:36:56 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:10:45.018 The operation has completed successfully. 00:10:45.018 06:36:57 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:45.275 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:45.840 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:45.840 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:45.840 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:45.840 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:45.840 06:36:58 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:10:45.840 06:36:58 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.840 06:36:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:45.840 [] 00:10:45.840 06:36:58 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.840 06:36:58 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:10:45.840 06:36:58 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:10:45.840 06:36:58 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:10:45.840 06:36:58 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:45.840 06:36:58 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:10:45.840 06:36:58 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.840 06:36:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:46.408 06:36:58 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.408 06:36:58 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:10:46.408 06:36:58 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.408 06:36:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:46.408 06:36:58 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.408 06:36:58 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:10:46.408 06:36:58 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:10:46.409 06:36:58 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.409 06:36:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:46.409 06:36:58 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.409 06:36:58 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:10:46.409 06:36:58 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.409 06:36:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:46.409 06:36:58 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.409 06:36:58 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:10:46.409 06:36:58 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.409 06:36:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:46.409 06:36:58 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.409 06:36:58 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:10:46.409 06:36:58 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:10:46.409 06:36:58 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.409 06:36:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:46.409 06:36:58 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:10:46.409 06:36:58 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.409 06:36:58 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:10:46.409 06:36:58 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:10:46.410 06:36:58 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "06a88835-e553-4679-9b83-cff8d11102ca"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "06a88835-e553-4679-9b83-cff8d11102ca",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' 
"oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "a3e3ae60-34b9-4744-bc11-51112190a158"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a3e3ae60-34b9-4744-bc11-51112190a158",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' 
"trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "efbc4f18-6bbc-4a74-b453-25f6b9b1405f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "efbc4f18-6bbc-4a74-b453-25f6b9b1405f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "7e8f2d2a-023e-482e-808e-4b7d1c44cca1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7e8f2d2a-023e-482e-808e-4b7d1c44cca1",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' 
"can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "e0156923-f904-4224-b109-92da80967a8e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "e0156923-f904-4224-b109-92da80967a8e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:10:46.410 06:36:59 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:10:46.410 06:36:59 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:10:46.410 06:36:59 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:10:46.410 06:36:59 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 61000 00:10:46.410 06:36:59 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 61000 ']' 00:10:46.410 06:36:59 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 61000 00:10:46.410 06:36:59 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:10:46.410 06:36:59 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:46.410 06:36:59 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61000 00:10:46.410 killing process with pid 61000 00:10:46.410 06:36:59 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:46.410 06:36:59 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:46.410 06:36:59 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61000' 00:10:46.410 06:36:59 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 61000 00:10:46.410 06:36:59 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 61000 00:10:48.310 06:37:00 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:48.310 06:37:00 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:10:48.310 06:37:00 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:10:48.310 06:37:00 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:48.310 06:37:00 
blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:48.310 ************************************ 00:10:48.310 START TEST bdev_hello_world 00:10:48.310 ************************************ 00:10:48.310 06:37:00 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:10:48.310 [2024-12-06 06:37:00.642567] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:10:48.310 [2024-12-06 06:37:00.642694] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61624 ] 00:10:48.310 [2024-12-06 06:37:00.801517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.310 [2024-12-06 06:37:00.903788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.876 [2024-12-06 06:37:01.450101] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:10:48.877 [2024-12-06 06:37:01.450160] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:10:48.877 [2024-12-06 06:37:01.450183] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:10:48.877 [2024-12-06 06:37:01.452676] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:10:48.877 [2024-12-06 06:37:01.453607] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:10:48.877 [2024-12-06 06:37:01.453636] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:10:48.877 [2024-12-06 06:37:01.454075] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
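The notices above are the complete hello_bdev round trip: open Nvme0n1 through the bdev layer, write a test string, read it back, and print it. Outside the harness the step is a single command, with flags exactly as invoked in the trace:

    # Open bdev Nvme0n1 via the JSON config, write "Hello World!", read it back.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/examples/hello_bdev" \
        --json "$SPDK/test/bdev/bdev.json" \
        -b Nvme0n1    # target bdev; the app prints the string it read on success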
00:10:48.877 00:10:48.877 [2024-12-06 06:37:01.454104] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:10:49.442 00:10:49.442 real 0m1.596s 00:10:49.442 user 0m1.309s 00:10:49.442 sys 0m0.181s 00:10:49.442 ************************************ 00:10:49.442 END TEST bdev_hello_world 00:10:49.442 06:37:02 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:49.442 06:37:02 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:10:49.442 ************************************ 00:10:49.700 06:37:02 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:10:49.700 06:37:02 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:49.700 06:37:02 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:49.700 06:37:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:49.700 ************************************ 00:10:49.700 START TEST bdev_bounds 00:10:49.700 ************************************ 00:10:49.700 06:37:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:10:49.700 Process bdevio pid: 61662 00:10:49.700 06:37:02 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61662 00:10:49.700 06:37:02 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:10:49.700 06:37:02 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61662' 00:10:49.700 06:37:02 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61662 00:10:49.700 06:37:02 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:49.700 06:37:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61662 ']' 00:10:49.700 06:37:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:49.700 06:37:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:49.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:49.700 06:37:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:49.700 06:37:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:49.700 06:37:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:49.700 [2024-12-06 06:37:02.301136] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
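bdev_bounds drives bdevio rather than bdevperf: started with -w, the app brings up all bdevs and then blocks until an external RPC call triggers its CUnit suites, which is what tests.py does next. In outline, from the invocations traced here (the trailing '' positional and the -s 0 memory-size flag are kept as logged):

    # Launch bdevio in wait mode, then fire the test suites over RPC.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/test/bdev/bdevio/bdevio" -w -s 0 \
        --json "$SPDK/test/bdev/bdev.json" '' &
    bdevio_pid=$!
    waitforlisten "$bdevio_pid"                       # harness helper, as above
    "$SPDK/test/bdev/bdevio/tests.py" perform_tests   # starts the CUnit run logged below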
00:10:49.700 [2024-12-06 06:37:02.301269] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61662 ] 00:10:49.959 [2024-12-06 06:37:02.460362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:49.959 [2024-12-06 06:37:02.563944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:49.959 [2024-12-06 06:37:02.564175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:49.959 [2024-12-06 06:37:02.564292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.523 06:37:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:50.523 06:37:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:10:50.523 06:37:03 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:10:50.523 I/O targets: 00:10:50.523 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:10:50.523 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:10:50.523 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:10:50.523 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:50.523 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:50.523 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:50.523 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:10:50.523 00:10:50.523 00:10:50.523 CUnit - A unit testing framework for C - Version 2.1-3 00:10:50.523 http://cunit.sourceforge.net/ 00:10:50.523 00:10:50.523 00:10:50.523 Suite: bdevio tests on: Nvme3n1 00:10:50.523 Test: blockdev write read block ...passed 00:10:50.523 Test: blockdev write zeroes read block ...passed 00:10:50.523 Test: blockdev write zeroes read no split ...passed 00:10:50.781 Test: blockdev write zeroes read split ...passed 00:10:50.781 Test: blockdev write zeroes read split partial ...passed 00:10:50.781 Test: blockdev reset ...[2024-12-06 06:37:03.306388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:10:50.781 passed 00:10:50.781 Test: blockdev write read 8 blocks ...[2024-12-06 06:37:03.309355] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:10:50.781 passed 00:10:50.781 Test: blockdev write read size > 128k ...passed 00:10:50.781 Test: blockdev write read invalid size ...passed 00:10:50.781 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:50.781 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:50.781 Test: blockdev write read max offset ...passed 00:10:50.781 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:50.781 Test: blockdev writev readv 8 blocks ...passed 00:10:50.781 Test: blockdev writev readv 30 x 1block ...passed 00:10:50.781 Test: blockdev writev readv block ...passed 00:10:50.781 Test: blockdev writev readv size > 128k ...passed 00:10:50.781 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:50.781 Test: blockdev comparev and writev ...[2024-12-06 06:37:03.319830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29ce04000 len:0x1000 00:10:50.781 [2024-12-06 06:37:03.319919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:50.781 passed 00:10:50.781 Test: blockdev nvme passthru rw ...passed 00:10:50.781 Test: blockdev nvme passthru vendor specific ...[2024-12-06 06:37:03.320791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:50.781 [2024-12-06 06:37:03.320870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:50.781 passed 00:10:50.781 Test: blockdev nvme admin passthru ...passed 00:10:50.781 Test: blockdev copy ...passed 00:10:50.781 Suite: bdevio tests on: Nvme2n3 00:10:50.781 Test: blockdev write read block ...passed 00:10:50.781 Test: blockdev write zeroes read block ...passed 00:10:50.781 Test: blockdev write zeroes read no split ...passed 00:10:50.781 Test: blockdev write zeroes read split ...passed 00:10:50.782 Test: blockdev write zeroes read split partial ...passed 00:10:50.782 Test: blockdev reset ...[2024-12-06 06:37:03.379759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:50.782 [2024-12-06 06:37:03.382967] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:10:50.782 passed 00:10:50.782 Test: blockdev write read 8 blocks ...passed 00:10:50.782 Test: blockdev write read size > 128k ...passed 00:10:50.782 Test: blockdev write read invalid size ...passed 00:10:50.782 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:50.782 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:50.782 Test: blockdev write read max offset ...passed 00:10:50.782 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:50.782 Test: blockdev writev readv 8 blocks ...passed 00:10:50.782 Test: blockdev writev readv 30 x 1block ...passed 00:10:50.782 Test: blockdev writev readv block ...passed 00:10:50.782 Test: blockdev writev readv size > 128k ...passed 00:10:50.782 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:50.782 Test: blockdev comparev and writev ...passed 00:10:50.782 Test: blockdev nvme passthru rw ...[2024-12-06 06:37:03.388864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29ce02000 len:0x1000 00:10:50.782 [2024-12-06 06:37:03.388909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:50.782 passed 00:10:50.782 Test: blockdev nvme passthru vendor specific ...passed 00:10:50.782 Test: blockdev nvme admin passthru ...[2024-12-06 06:37:03.389406] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:50.782 [2024-12-06 06:37:03.389430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:50.782 passed 00:10:50.782 Test: blockdev copy ...passed 00:10:50.782 Suite: bdevio tests on: Nvme2n2 00:10:50.782 Test: blockdev write read block ...passed 00:10:50.782 Test: blockdev write zeroes read block ...passed 00:10:50.782 Test: blockdev write zeroes read no split ...passed 00:10:50.782 Test: blockdev write zeroes read split ...passed 00:10:50.782 Test: blockdev write zeroes read split partial ...passed 00:10:50.782 Test: blockdev reset ...[2024-12-06 06:37:03.432422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:50.782 [2024-12-06 06:37:03.435443] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:10:50.782 passed 00:10:50.782 Test: blockdev write read 8 blocks ...passed 00:10:50.782 Test: blockdev write read size > 128k ...passed 00:10:50.782 Test: blockdev write read invalid size ...passed 00:10:50.782 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:50.782 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:50.782 Test: blockdev write read max offset ...passed 00:10:50.782 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:50.782 Test: blockdev writev readv 8 blocks ...passed 00:10:50.782 Test: blockdev writev readv 30 x 1block ...passed 00:10:50.782 Test: blockdev writev readv block ...passed 00:10:50.782 Test: blockdev writev readv size > 128k ...passed 00:10:50.782 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:50.782 Test: blockdev comparev and writev ...passed 00:10:50.782 Test: blockdev nvme passthru rw ...[2024-12-06 06:37:03.441489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e1a38000 len:0x1000 00:10:50.782 [2024-12-06 06:37:03.441524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:50.782 passed 00:10:50.782 Test: blockdev nvme passthru vendor specific ...passed 00:10:50.782 Test: blockdev nvme admin passthru ...[2024-12-06 06:37:03.442076] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:50.782 [2024-12-06 06:37:03.442098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:50.782 passed 00:10:50.782 Test: blockdev copy ...passed 00:10:50.782 Suite: bdevio tests on: Nvme2n1 00:10:50.782 Test: blockdev write read block ...passed 00:10:50.782 Test: blockdev write zeroes read block ...passed 00:10:50.782 Test: blockdev write zeroes read no split ...passed 00:10:50.782 Test: blockdev write zeroes read split ...passed 00:10:50.782 Test: blockdev write zeroes read split partial ...passed 00:10:50.782 Test: blockdev reset ...[2024-12-06 06:37:03.485681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:50.782 [2024-12-06 06:37:03.489692] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:10:50.782 passed 00:10:50.782 Test: blockdev write read 8 blocks ...passed 00:10:50.782 Test: blockdev write read size > 128k ...passed 00:10:50.782 Test: blockdev write read invalid size ...passed 00:10:50.782 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:50.782 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:50.782 Test: blockdev write read max offset ...passed 00:10:50.782 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:50.782 Test: blockdev writev readv 8 blocks ...passed 00:10:50.782 Test: blockdev writev readv 30 x 1block ...passed 00:10:50.782 Test: blockdev writev readv block ...passed 00:10:50.782 Test: blockdev writev readv size > 128k ...passed 00:10:50.782 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:50.782 Test: blockdev comparev and writev ...passed 00:10:50.782 Test: blockdev nvme passthru rw ...[2024-12-06 06:37:03.495734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e1a34000 len:0x1000 00:10:50.782 [2024-12-06 06:37:03.495771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:50.782 passed 00:10:50.782 Test: blockdev nvme passthru vendor specific ...passed 00:10:50.782 Test: blockdev nvme admin passthru ...[2024-12-06 06:37:03.496323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:50.782 [2024-12-06 06:37:03.496344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:50.782 passed 00:10:50.782 Test: blockdev copy ...passed 00:10:50.782 Suite: bdevio tests on: Nvme1n1p2 00:10:50.782 Test: blockdev write read block ...passed 00:10:50.782 Test: blockdev write zeroes read block ...passed 00:10:50.782 Test: blockdev write zeroes read no split ...passed 00:10:51.101 Test: blockdev write zeroes read split ...passed 00:10:51.101 Test: blockdev write zeroes read split partial ...passed 00:10:51.101 Test: blockdev reset ...[2024-12-06 06:37:03.553474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:10:51.101 [2024-12-06 06:37:03.557377] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:10:51.101 passed 00:10:51.101 Test: blockdev write read 8 blocks ...passed 00:10:51.102 Test: blockdev write read size > 128k ...passed 00:10:51.102 Test: blockdev write read invalid size ...passed 00:10:51.102 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:51.102 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:51.102 Test: blockdev write read max offset ...passed 00:10:51.102 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:51.102 Test: blockdev writev readv 8 blocks ...passed 00:10:51.102 Test: blockdev writev readv 30 x 1block ...passed 00:10:51.102 Test: blockdev writev readv block ...passed 00:10:51.102 Test: blockdev writev readv size > 128k ...passed 00:10:51.102 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:51.102 Test: blockdev comparev and writev ...passed 00:10:51.102 Test: blockdev nvme passthru rw ...passed 00:10:51.102 Test: blockdev nvme passthru vendor specific ...passed 00:10:51.102 Test: blockdev nvme admin passthru ...passed 00:10:51.102 Test: blockdev copy ...[2024-12-06 06:37:03.564043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2e1a30000 len:0x1000 00:10:51.102 [2024-12-06 06:37:03.564078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:51.102 passed 00:10:51.102 Suite: bdevio tests on: Nvme1n1p1 00:10:51.102 Test: blockdev write read block ...passed 00:10:51.102 Test: blockdev write zeroes read block ...passed 00:10:51.102 Test: blockdev write zeroes read no split ...passed 00:10:51.102 Test: blockdev write zeroes read split ...passed 00:10:51.102 Test: blockdev write zeroes read split partial ...passed 00:10:51.102 Test: blockdev reset ...[2024-12-06 06:37:03.609080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:10:51.102 passed 00:10:51.102 Test: blockdev write read 8 blocks ...[2024-12-06 06:37:03.611745] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
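The reset tests above disconnect and re-enable the controller at 0000:00:11.0 twice, once per GPT partition suite (Nvme1n1p2 and Nvme1n1p1), because both bdevs are carved from the same controller: nvme_ctrlr_disconnect tears the controller down and bdev_nvme_reset_ctrlr_complete reports it back online. Outside bdevio the same path can be driven over RPC; a sketch assuming a running SPDK target with a bdev_nvme controller attached under the name Nvme1:

    # Ask the target to reset the whole NVMe controller; every bdev carved
    # from it is quiesced while the reset runs and resumed afterwards.
    sudo /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_reset_controller Nvme1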
00:10:51.102 passed 00:10:51.102 Test: blockdev write read size > 128k ...passed 00:10:51.102 Test: blockdev write read invalid size ...passed 00:10:51.102 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:51.102 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:51.102 Test: blockdev write read max offset ...passed 00:10:51.102 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:51.102 Test: blockdev writev readv 8 blocks ...passed 00:10:51.102 Test: blockdev writev readv 30 x 1block ...passed 00:10:51.102 Test: blockdev writev readv block ...passed 00:10:51.102 Test: blockdev writev readv size > 128k ...passed 00:10:51.102 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:51.102 Test: blockdev comparev and writev ...[2024-12-06 06:37:03.618203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x29d00e000 len:0x1000 00:10:51.102 [2024-12-06 06:37:03.618236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:51.102 passed 00:10:51.102 Test: blockdev nvme passthru rw ...passed 00:10:51.102 Test: blockdev nvme passthru vendor specific ...passed 00:10:51.102 Test: blockdev nvme admin passthru ...passed 00:10:51.102 Test: blockdev copy ...passed 00:10:51.102 Suite: bdevio tests on: Nvme0n1 00:10:51.102 Test: blockdev write read block ...passed 00:10:51.102 Test: blockdev write zeroes read block ...passed 00:10:51.102 Test: blockdev write zeroes read no split ...passed 00:10:51.102 Test: blockdev write zeroes read split ...passed 00:10:51.102 Test: blockdev write zeroes read split partial ...passed 00:10:51.102 Test: blockdev reset ...[2024-12-06 06:37:03.662078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:10:51.102 [2024-12-06 06:37:03.664790] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:10:51.102 passed 00:10:51.102 Test: blockdev write read 8 blocks ...passed 00:10:51.102 Test: blockdev write read size > 128k ...passed 00:10:51.102 Test: blockdev write read invalid size ...passed 00:10:51.102 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:51.102 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:51.102 Test: blockdev write read max offset ...passed 00:10:51.102 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:51.102 Test: blockdev writev readv 8 blocks ...passed 00:10:51.102 Test: blockdev writev readv 30 x 1block ...passed 00:10:51.102 Test: blockdev writev readv block ...passed 00:10:51.102 Test: blockdev writev readv size > 128k ...passed 00:10:51.102 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:51.102 Test: blockdev comparev and writev ...[2024-12-06 06:37:03.670265] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:10:51.102 separate metadata which is not supported yet. 
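The ERROR line just above is informational rather than fatal: bdevio skips its comparev_and_writev case on Nvme0n1 because that namespace is formatted with separate (non-interleaved) metadata, which the test does not support yet, and the case is still recorded as passed. Whether a bdev carries such a format can be checked over RPC; a sketch assuming a running target and assuming bdev_get_bdevs reports md_size/md_interleave fields for this bdev:

    # md_size > 0 together with md_interleave == false indicates separate
    # metadata, the layout the comparev_and_writev test skips above.
    sudo /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 \
        | jq '.[0] | {name, block_size, md_size, md_interleave}'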
00:10:51.102 passed 00:10:51.102 Test: blockdev nvme passthru rw ...passed 00:10:51.102 Test: blockdev nvme passthru vendor specific ...[2024-12-06 06:37:03.670779] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:10:51.102 [2024-12-06 06:37:03.670813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:10:51.102 passed 00:10:51.102 Test: blockdev nvme admin passthru ...passed 00:10:51.102 Test: blockdev copy ...passed 00:10:51.102 00:10:51.102 Run Summary: Type Total Ran Passed Failed Inactive 00:10:51.102 suites 7 7 n/a 0 0 00:10:51.102 tests 161 161 161 0 0 00:10:51.102 asserts 1025 1025 1025 0 n/a 00:10:51.102 00:10:51.102 Elapsed time = 1.134 seconds 00:10:51.102 0 00:10:51.102 06:37:03 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61662 00:10:51.102 06:37:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61662 ']' 00:10:51.102 06:37:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61662 00:10:51.102 06:37:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:10:51.102 06:37:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:51.102 06:37:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61662 00:10:51.102 killing process with pid 61662 00:10:51.102 06:37:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:51.102 06:37:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:51.102 06:37:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61662' 00:10:51.102 06:37:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61662 00:10:51.102 06:37:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61662 00:10:51.685 06:37:04 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:10:51.685 00:10:51.685 real 0m2.160s 00:10:51.685 user 0m5.522s 00:10:51.685 sys 0m0.308s 00:10:51.685 06:37:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:51.685 ************************************ 00:10:51.685 END TEST bdev_bounds 00:10:51.685 ************************************ 00:10:51.685 06:37:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:51.943 06:37:04 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:51.943 06:37:04 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:51.943 06:37:04 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:51.943 06:37:04 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:51.943 ************************************ 00:10:51.943 START TEST bdev_nbd 00:10:51.943 ************************************ 00:10:51.943 06:37:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:51.943 06:37:04 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:10:51.943 06:37:04 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:10:51.943 06:37:04 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:51.943 06:37:04 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:51.943 06:37:04 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:51.943 06:37:04 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:10:51.943 06:37:04 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:10:51.943 06:37:04 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:10:51.943 06:37:04 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:10:51.943 06:37:04 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:10:51.943 06:37:04 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:10:51.943 06:37:04 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:51.943 06:37:04 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:10:51.943 06:37:04 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:51.943 06:37:04 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:10:51.943 06:37:04 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61716 00:10:51.943 06:37:04 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:10:51.943 06:37:04 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61716 /var/tmp/spdk-nbd.sock 00:10:51.943 06:37:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61716 ']' 00:10:51.943 06:37:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:51.943 06:37:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:51.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:51.943 06:37:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:51.943 06:37:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:51.943 06:37:04 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:51.943 06:37:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:51.943 [2024-12-06 06:37:04.510789] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
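Everything from here on is the bdev_nbd phase: bdev_svc comes up on a private RPC socket, each of the seven bdevs is exported as a kernel /dev/nbd* device, the devices are read with direct-I/O dd to verify the data path, and the mappings are listed and torn down over the same socket. Condensed from the xtrace that follows, the core flow looks roughly like this (paths as in this job; the nbd kernel module must be present, and the polling loop stands in for the waitfornbd helper, which retries the same grep up to 20 times):

    # Start the minimal bdev application on its own RPC socket.
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-nbd.sock -i 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    # Export one bdev as an NBD device and wait for the kernel to see it.
    $rpc nbd_start_disk Nvme0n1 /dev/nbd0
    until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done

    # One 4 KiB direct read through the kernel block layer proves the path.
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct

    # List active mappings, then tear the export down again.
    $rpc nbd_get_disks | jq -r '.[] | .nbd_device'
    $rpc nbd_stop_disk /dev/nbd0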
00:10:51.943 [2024-12-06 06:37:04.510910] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:51.943 [2024-12-06 06:37:04.666536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.200 [2024-12-06 06:37:04.768722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.765 06:37:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:52.765 06:37:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:10:52.765 06:37:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:52.765 06:37:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:52.765 06:37:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:52.765 06:37:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:10:52.765 06:37:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:52.765 06:37:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:52.765 06:37:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:52.765 06:37:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:10:52.765 06:37:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:10:52.765 06:37:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:10:52.765 06:37:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:10:52.765 06:37:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:52.765 06:37:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:10:53.021 06:37:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:10:53.021 06:37:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:10:53.021 06:37:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:10:53.021 06:37:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:53.021 06:37:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:53.021 06:37:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:53.021 06:37:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:53.021 06:37:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:53.021 06:37:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:53.021 06:37:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:53.021 06:37:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:53.021 06:37:05 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:53.021 1+0 records in 00:10:53.021 1+0 records out 00:10:53.021 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287848 s, 14.2 MB/s 00:10:53.021 06:37:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:53.021 06:37:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:53.021 06:37:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:53.021 06:37:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:53.021 06:37:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:53.021 06:37:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:53.021 06:37:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:53.021 06:37:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:10:53.278 06:37:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:10:53.278 06:37:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:10:53.278 06:37:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:10:53.278 06:37:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:53.278 06:37:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:53.278 06:37:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:53.278 06:37:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:53.278 06:37:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:53.278 06:37:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:53.278 06:37:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:53.278 06:37:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:53.278 06:37:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:53.278 1+0 records in 00:10:53.278 1+0 records out 00:10:53.278 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000379165 s, 10.8 MB/s 00:10:53.278 06:37:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:53.278 06:37:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:53.278 06:37:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:53.278 06:37:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:53.278 06:37:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:53.278 06:37:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:53.278 06:37:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:53.279 06:37:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:10:53.537 06:37:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:10:53.537 06:37:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:10:53.537 06:37:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:10:53.537 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:10:53.537 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:53.537 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:53.537 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:53.537 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:10:53.537 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:53.537 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:53.537 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:53.537 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:53.537 1+0 records in 00:10:53.537 1+0 records out 00:10:53.537 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000364343 s, 11.2 MB/s 00:10:53.537 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:53.537 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:53.537 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:53.537 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:53.537 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:53.537 06:37:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:53.537 06:37:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:53.537 06:37:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:10:53.795 06:37:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:10:53.795 06:37:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:10:53.795 06:37:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:10:53.795 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:10:53.795 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:53.795 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:53.795 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:53.795 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:10:53.795 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:53.795 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:53.795 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:53.795 06:37:06 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:53.795 1+0 records in 00:10:53.795 1+0 records out 00:10:53.795 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000436353 s, 9.4 MB/s 00:10:53.795 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:53.795 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:53.795 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:53.795 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:53.795 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:53.795 06:37:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:53.795 06:37:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:53.795 06:37:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:10:54.053 06:37:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:10:54.054 06:37:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:10:54.054 06:37:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:10:54.054 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:10:54.054 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:54.054 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:54.054 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:54.054 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:10:54.054 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:54.054 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:54.054 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:54.054 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:54.054 1+0 records in 00:10:54.054 1+0 records out 00:10:54.054 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000473046 s, 8.7 MB/s 00:10:54.054 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:54.054 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:54.054 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:54.054 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:54.054 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:54.054 06:37:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:54.054 06:37:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:54.054 06:37:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:10:54.311 06:37:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:10:54.311 06:37:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:10:54.311 06:37:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:10:54.311 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:10:54.311 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:54.311 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:54.311 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:54.311 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:10:54.311 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:54.311 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:54.311 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:54.311 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:54.311 1+0 records in 00:10:54.311 1+0 records out 00:10:54.311 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000502192 s, 8.2 MB/s 00:10:54.311 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:54.311 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:54.311 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:54.311 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:54.311 06:37:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:54.311 06:37:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:54.311 06:37:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:54.311 06:37:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:10:54.568 06:37:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:10:54.568 06:37:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:10:54.568 06:37:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:10:54.568 06:37:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:10:54.568 06:37:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:54.568 06:37:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:54.568 06:37:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:54.568 06:37:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:10:54.568 06:37:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:54.568 06:37:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:54.568 06:37:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:54.568 06:37:07 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:54.568 1+0 records in 00:10:54.568 1+0 records out 00:10:54.568 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000433205 s, 9.5 MB/s 00:10:54.568 06:37:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:54.568 06:37:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:54.568 06:37:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:54.568 06:37:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:54.568 06:37:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:54.568 06:37:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:54.568 06:37:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:54.568 06:37:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:54.568 06:37:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:10:54.568 { 00:10:54.568 "nbd_device": "/dev/nbd0", 00:10:54.568 "bdev_name": "Nvme0n1" 00:10:54.568 }, 00:10:54.568 { 00:10:54.568 "nbd_device": "/dev/nbd1", 00:10:54.568 "bdev_name": "Nvme1n1p1" 00:10:54.568 }, 00:10:54.568 { 00:10:54.568 "nbd_device": "/dev/nbd2", 00:10:54.568 "bdev_name": "Nvme1n1p2" 00:10:54.568 }, 00:10:54.568 { 00:10:54.568 "nbd_device": "/dev/nbd3", 00:10:54.568 "bdev_name": "Nvme2n1" 00:10:54.568 }, 00:10:54.568 { 00:10:54.568 "nbd_device": "/dev/nbd4", 00:10:54.568 "bdev_name": "Nvme2n2" 00:10:54.568 }, 00:10:54.568 { 00:10:54.568 "nbd_device": "/dev/nbd5", 00:10:54.568 "bdev_name": "Nvme2n3" 00:10:54.568 }, 00:10:54.568 { 00:10:54.568 "nbd_device": "/dev/nbd6", 00:10:54.568 "bdev_name": "Nvme3n1" 00:10:54.568 } 00:10:54.568 ]' 00:10:54.568 06:37:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:10:54.568 06:37:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:10:54.568 { 00:10:54.568 "nbd_device": "/dev/nbd0", 00:10:54.568 "bdev_name": "Nvme0n1" 00:10:54.568 }, 00:10:54.568 { 00:10:54.568 "nbd_device": "/dev/nbd1", 00:10:54.568 "bdev_name": "Nvme1n1p1" 00:10:54.569 }, 00:10:54.569 { 00:10:54.569 "nbd_device": "/dev/nbd2", 00:10:54.569 "bdev_name": "Nvme1n1p2" 00:10:54.569 }, 00:10:54.569 { 00:10:54.569 "nbd_device": "/dev/nbd3", 00:10:54.569 "bdev_name": "Nvme2n1" 00:10:54.569 }, 00:10:54.569 { 00:10:54.569 "nbd_device": "/dev/nbd4", 00:10:54.569 "bdev_name": "Nvme2n2" 00:10:54.569 }, 00:10:54.569 { 00:10:54.569 "nbd_device": "/dev/nbd5", 00:10:54.569 "bdev_name": "Nvme2n3" 00:10:54.569 }, 00:10:54.569 { 00:10:54.569 "nbd_device": "/dev/nbd6", 00:10:54.569 "bdev_name": "Nvme3n1" 00:10:54.569 } 00:10:54.569 ]' 00:10:54.569 06:37:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:10:54.827 06:37:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:10:54.827 06:37:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:54.827 06:37:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:10:54.827 06:37:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:54.827 06:37:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:54.827 06:37:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:54.827 06:37:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:54.827 06:37:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:54.827 06:37:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:54.827 06:37:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:54.827 06:37:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:54.827 06:37:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:54.827 06:37:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:54.827 06:37:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:54.827 06:37:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:54.827 06:37:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:54.827 06:37:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:55.210 06:37:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:55.210 06:37:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:55.210 06:37:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:55.210 06:37:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:55.210 06:37:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:55.210 06:37:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:55.210 06:37:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:55.210 06:37:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:55.210 06:37:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:55.210 06:37:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:10:55.468 06:37:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:10:55.468 06:37:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:10:55.468 06:37:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:10:55.468 06:37:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:55.468 06:37:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:55.468 06:37:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:10:55.468 06:37:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:55.468 06:37:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:55.468 06:37:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:55.468 06:37:07 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:10:55.468 06:37:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:10:55.468 06:37:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:10:55.468 06:37:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:10:55.468 06:37:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:55.468 06:37:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:55.468 06:37:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:10:55.468 06:37:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:55.468 06:37:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:55.468 06:37:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:55.468 06:37:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:10:55.726 06:37:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:10:55.726 06:37:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:10:55.726 06:37:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:10:55.726 06:37:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:55.726 06:37:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:55.726 06:37:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:10:55.726 06:37:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:55.726 06:37:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:55.726 06:37:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:55.726 06:37:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:10:55.985 06:37:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:10:55.985 06:37:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:10:55.985 06:37:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:10:55.985 06:37:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:55.985 06:37:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:55.985 06:37:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:10:55.985 06:37:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:55.985 06:37:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:55.985 06:37:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:55.985 06:37:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:10:56.243 06:37:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:10:56.243 06:37:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:10:56.243 06:37:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:10:56.243 06:37:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:56.243 06:37:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:56.243 06:37:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:10:56.243 06:37:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:56.243 06:37:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:56.243 06:37:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:56.243 06:37:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:56.243 06:37:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:56.501 06:37:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:56.501 06:37:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:56.501 06:37:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:56.501 06:37:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:56.501 06:37:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:56.501 06:37:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:56.501 06:37:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:56.501 06:37:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:56.501 06:37:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:56.501 06:37:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:10:56.501 06:37:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:10:56.501 06:37:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:10:56.501 06:37:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:56.501 06:37:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:56.501 06:37:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:56.501 06:37:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:56.501 06:37:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:56.501 06:37:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:56.501 06:37:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:56.501 06:37:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:56.501 06:37:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:56.501 06:37:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:56.501 
06:37:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:56.501 06:37:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:56.501 06:37:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:10:56.501 06:37:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:56.501 06:37:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:56.502 06:37:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:10:56.760 /dev/nbd0 00:10:56.760 06:37:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:56.760 06:37:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:56.760 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:56.760 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:56.760 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:56.760 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:56.760 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:56.760 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:56.760 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:56.760 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:56.760 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:56.760 1+0 records in 00:10:56.760 1+0 records out 00:10:56.760 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248464 s, 16.5 MB/s 00:10:56.760 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:56.760 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:56.761 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:56.761 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:56.761 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:56.761 06:37:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:56.761 06:37:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:56.761 06:37:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:10:57.020 /dev/nbd1 00:10:57.020 06:37:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:57.020 06:37:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:57.020 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:57.020 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:57.020 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:57.020 06:37:09 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:57.020 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:57.020 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:57.020 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:57.020 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:57.020 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:57.020 1+0 records in 00:10:57.020 1+0 records out 00:10:57.020 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386097 s, 10.6 MB/s 00:10:57.020 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:57.020 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:57.020 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:57.020 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:57.020 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:57.020 06:37:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:57.020 06:37:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:57.020 06:37:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:10:57.020 /dev/nbd10 00:10:57.020 06:37:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:10:57.020 06:37:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:10:57.020 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:10:57.020 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:57.020 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:57.020 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:57.020 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:10:57.020 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:57.020 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:57.020 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:57.020 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:57.020 1+0 records in 00:10:57.020 1+0 records out 00:10:57.020 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000488067 s, 8.4 MB/s 00:10:57.278 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:57.278 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:57.278 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:57.278 06:37:09 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:57.278 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:57.278 06:37:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:57.278 06:37:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:57.278 06:37:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:10:57.278 /dev/nbd11 00:10:57.278 06:37:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:10:57.278 06:37:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:10:57.278 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:10:57.278 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:57.278 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:57.278 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:57.278 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:10:57.278 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:57.278 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:57.278 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:57.278 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:57.278 1+0 records in 00:10:57.278 1+0 records out 00:10:57.278 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000352099 s, 11.6 MB/s 00:10:57.278 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:57.278 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:57.278 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:57.278 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:57.278 06:37:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:57.278 06:37:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:57.278 06:37:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:57.278 06:37:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:10:57.536 /dev/nbd12 00:10:57.536 06:37:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:10:57.536 06:37:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:10:57.536 06:37:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:10:57.536 06:37:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:57.536 06:37:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:57.536 06:37:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:57.536 06:37:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 
/proc/partitions 00:10:57.536 06:37:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:57.536 06:37:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:57.536 06:37:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:57.536 06:37:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:57.536 1+0 records in 00:10:57.536 1+0 records out 00:10:57.536 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000369661 s, 11.1 MB/s 00:10:57.536 06:37:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:57.536 06:37:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:57.536 06:37:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:57.536 06:37:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:57.536 06:37:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:57.536 06:37:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:57.536 06:37:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:57.536 06:37:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:10:57.794 /dev/nbd13 00:10:57.794 06:37:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:10:57.794 06:37:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:10:57.794 06:37:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:10:57.794 06:37:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:57.794 06:37:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:57.794 06:37:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:57.794 06:37:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:10:57.794 06:37:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:57.794 06:37:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:57.794 06:37:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:57.794 06:37:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:57.794 1+0 records in 00:10:57.794 1+0 records out 00:10:57.794 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000599064 s, 6.8 MB/s 00:10:57.794 06:37:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:57.794 06:37:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:57.794 06:37:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:57.794 06:37:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:57.794 06:37:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:57.794 06:37:10 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:57.794 06:37:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:57.794 06:37:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:10:58.052 /dev/nbd14 00:10:58.052 06:37:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:10:58.052 06:37:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:10:58.052 06:37:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:10:58.052 06:37:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:58.052 06:37:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:58.052 06:37:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:58.052 06:37:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:10:58.052 06:37:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:58.052 06:37:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:58.052 06:37:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:58.052 06:37:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:58.052 1+0 records in 00:10:58.052 1+0 records out 00:10:58.052 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412361 s, 9.9 MB/s 00:10:58.052 06:37:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:58.052 06:37:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:58.052 06:37:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:58.052 06:37:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:58.052 06:37:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:58.052 06:37:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:58.053 06:37:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:58.053 06:37:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:58.053 06:37:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:58.053 06:37:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:58.310 06:37:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:58.311 { 00:10:58.311 "nbd_device": "/dev/nbd0", 00:10:58.311 "bdev_name": "Nvme0n1" 00:10:58.311 }, 00:10:58.311 { 00:10:58.311 "nbd_device": "/dev/nbd1", 00:10:58.311 "bdev_name": "Nvme1n1p1" 00:10:58.311 }, 00:10:58.311 { 00:10:58.311 "nbd_device": "/dev/nbd10", 00:10:58.311 "bdev_name": "Nvme1n1p2" 00:10:58.311 }, 00:10:58.311 { 00:10:58.311 "nbd_device": "/dev/nbd11", 00:10:58.311 "bdev_name": "Nvme2n1" 00:10:58.311 }, 00:10:58.311 { 00:10:58.311 "nbd_device": "/dev/nbd12", 00:10:58.311 "bdev_name": "Nvme2n2" 00:10:58.311 }, 00:10:58.311 { 00:10:58.311 "nbd_device": "/dev/nbd13", 
00:10:58.311 "bdev_name": "Nvme2n3" 00:10:58.311 }, 00:10:58.311 { 00:10:58.311 "nbd_device": "/dev/nbd14", 00:10:58.311 "bdev_name": "Nvme3n1" 00:10:58.311 } 00:10:58.311 ]' 00:10:58.311 06:37:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:58.311 06:37:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:58.311 { 00:10:58.311 "nbd_device": "/dev/nbd0", 00:10:58.311 "bdev_name": "Nvme0n1" 00:10:58.311 }, 00:10:58.311 { 00:10:58.311 "nbd_device": "/dev/nbd1", 00:10:58.311 "bdev_name": "Nvme1n1p1" 00:10:58.311 }, 00:10:58.311 { 00:10:58.311 "nbd_device": "/dev/nbd10", 00:10:58.311 "bdev_name": "Nvme1n1p2" 00:10:58.311 }, 00:10:58.311 { 00:10:58.311 "nbd_device": "/dev/nbd11", 00:10:58.311 "bdev_name": "Nvme2n1" 00:10:58.311 }, 00:10:58.311 { 00:10:58.311 "nbd_device": "/dev/nbd12", 00:10:58.311 "bdev_name": "Nvme2n2" 00:10:58.311 }, 00:10:58.311 { 00:10:58.311 "nbd_device": "/dev/nbd13", 00:10:58.311 "bdev_name": "Nvme2n3" 00:10:58.311 }, 00:10:58.311 { 00:10:58.311 "nbd_device": "/dev/nbd14", 00:10:58.311 "bdev_name": "Nvme3n1" 00:10:58.311 } 00:10:58.311 ]' 00:10:58.311 06:37:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:58.311 /dev/nbd1 00:10:58.311 /dev/nbd10 00:10:58.311 /dev/nbd11 00:10:58.311 /dev/nbd12 00:10:58.311 /dev/nbd13 00:10:58.311 /dev/nbd14' 00:10:58.311 06:37:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:58.311 06:37:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:58.311 /dev/nbd1 00:10:58.311 /dev/nbd10 00:10:58.311 /dev/nbd11 00:10:58.311 /dev/nbd12 00:10:58.311 /dev/nbd13 00:10:58.311 /dev/nbd14' 00:10:58.311 06:37:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:10:58.311 06:37:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:10:58.311 06:37:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:10:58.311 06:37:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:10:58.311 06:37:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:10:58.311 06:37:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:58.311 06:37:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:58.311 06:37:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:58.311 06:37:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:58.311 06:37:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:58.311 06:37:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:10:58.311 256+0 records in 00:10:58.311 256+0 records out 00:10:58.311 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00934664 s, 112 MB/s 00:10:58.311 06:37:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:58.311 06:37:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:58.311 256+0 records in 00:10:58.311 256+0 records out 00:10:58.311 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.0717962 s, 14.6 MB/s 00:10:58.311 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:58.311 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:58.569 256+0 records in 00:10:58.569 256+0 records out 00:10:58.569 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0738739 s, 14.2 MB/s 00:10:58.569 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:58.569 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:10:58.569 256+0 records in 00:10:58.569 256+0 records out 00:10:58.569 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.074382 s, 14.1 MB/s 00:10:58.569 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:58.569 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:10:58.569 256+0 records in 00:10:58.569 256+0 records out 00:10:58.569 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0745557 s, 14.1 MB/s 00:10:58.569 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:58.569 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:10:58.827 256+0 records in 00:10:58.827 256+0 records out 00:10:58.827 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0740399 s, 14.2 MB/s 00:10:58.827 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:58.827 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:10:58.827 256+0 records in 00:10:58.827 256+0 records out 00:10:58.827 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0727781 s, 14.4 MB/s 00:10:58.827 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:58.827 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:10:58.827 256+0 records in 00:10:58.827 256+0 records out 00:10:58.827 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0724293 s, 14.5 MB/s 00:10:58.827 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:10:58.827 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:58.827 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:58.827 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:58.827 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:58.827 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:58.827 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:58.827 06:37:11 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:58.827 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:10:58.827 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:58.827 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:10:58.827 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:58.827 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:10:58.827 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:58.827 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:10:58.827 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:58.827 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:10:58.827 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:58.827 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:10:58.827 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:58.827 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:10:58.827 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:58.827 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:58.827 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:58.827 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:58.827 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:58.827 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:58.827 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:58.827 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:59.085 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:59.085 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:59.085 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:59.085 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:59.085 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:59.085 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:59.085 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:59.085 06:37:11 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:59.085 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:59.085 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:59.342 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:59.342 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:59.342 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:59.342 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:59.342 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:59.342 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:59.342 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:59.342 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:59.342 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:59.342 06:37:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:10:59.600 06:37:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:10:59.600 06:37:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:10:59.600 06:37:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:10:59.600 06:37:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:59.600 06:37:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:59.600 06:37:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:10:59.600 06:37:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:59.600 06:37:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:59.600 06:37:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:59.600 06:37:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:10:59.860 06:37:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:10:59.860 06:37:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:10:59.860 06:37:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:10:59.860 06:37:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:59.860 06:37:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:59.860 06:37:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:10:59.860 06:37:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:59.860 06:37:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:59.860 06:37:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:59.860 06:37:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:11:00.118 06:37:12 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:11:00.118 06:37:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:11:00.118 06:37:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:11:00.118 06:37:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:00.118 06:37:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:00.118 06:37:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:00.118 06:37:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:00.118 06:37:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:00.118 06:37:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:00.118 06:37:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:11:00.118 06:37:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:11:00.118 06:37:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:11:00.118 06:37:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:11:00.118 06:37:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:00.118 06:37:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:00.118 06:37:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:11:00.118 06:37:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:00.375 06:37:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:00.375 06:37:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:00.375 06:37:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:11:00.375 06:37:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:11:00.375 06:37:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:11:00.375 06:37:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:11:00.375 06:37:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:00.375 06:37:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:00.375 06:37:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:11:00.375 06:37:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:00.375 06:37:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:00.375 06:37:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:00.375 06:37:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:00.375 06:37:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:00.631 06:37:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:00.631 06:37:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:00.631 06:37:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:00.631 06:37:13 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:00.631 06:37:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:11:00.631 06:37:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:00.631 06:37:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:11:00.631 06:37:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:11:00.631 06:37:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:11:00.631 06:37:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:11:00.631 06:37:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:00.631 06:37:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:11:00.631 06:37:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:11:00.631 06:37:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:00.631 06:37:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:11:00.631 06:37:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:11:00.887 malloc_lvol_verify 00:11:00.887 06:37:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:11:01.145 d3c7f437-86d9-4ab3-95cd-a0cd63131ab3 00:11:01.145 06:37:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:11:01.403 1be79860-296b-4327-af17-bb451886466f 00:11:01.403 06:37:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:11:01.661 /dev/nbd0 00:11:01.661 06:37:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:11:01.661 06:37:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:11:01.661 06:37:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:11:01.661 06:37:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:11:01.661 06:37:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:11:01.661 mke2fs 1.47.0 (5-Feb-2023) 00:11:01.661 Discarding device blocks: 0/4096 done 00:11:01.661 Creating filesystem with 4096 1k blocks and 1024 inodes 00:11:01.661 00:11:01.661 Allocating group tables: 0/1 done 00:11:01.661 Writing inode tables: 0/1 done 00:11:01.661 Creating journal (1024 blocks): done 00:11:01.661 Writing superblocks and filesystem accounting information: 0/1 done 00:11:01.661 00:11:01.661 06:37:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:11:01.661 06:37:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:01.661 06:37:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:01.661 06:37:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:01.661 06:37:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:01.661 06:37:14 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:01.661 06:37:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:01.661 06:37:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:01.661 06:37:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:01.661 06:37:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:01.661 06:37:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:01.661 06:37:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:01.661 06:37:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:01.661 06:37:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:01.661 06:37:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:01.661 06:37:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61716 00:11:01.661 06:37:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61716 ']' 00:11:01.661 06:37:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61716 00:11:01.661 06:37:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:11:01.661 06:37:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:01.661 06:37:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61716 00:11:01.661 06:37:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:01.661 killing process with pid 61716 00:11:01.661 06:37:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:01.661 06:37:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61716' 00:11:01.661 06:37:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61716 00:11:01.661 06:37:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61716 00:11:02.595 06:37:15 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:11:02.595 00:11:02.595 real 0m10.715s 00:11:02.595 user 0m15.472s 00:11:02.595 sys 0m3.433s 00:11:02.595 06:37:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:02.595 06:37:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:11:02.595 ************************************ 00:11:02.595 END TEST bdev_nbd 00:11:02.595 ************************************ 00:11:02.595 06:37:15 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:11:02.595 06:37:15 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:11:02.595 06:37:15 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:11:02.595 skipping fio tests on NVMe due to multi-ns failures. 00:11:02.595 06:37:15 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
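The bdev_nbd test that just ended drives every exported /dev/nbdX through the same two helpers: waitfornbd polls /proc/partitions until the kernel registers the device and a 4 KiB O_DIRECT read returns data, and nbd_dd_data_verify pushes 1 MiB of random bytes through the device and byte-compares them back with cmp. A minimal standalone sketch of both, reusing the exact commands from the trace (device name, temp paths, and the 0.1 s poll interval are illustrative; the helper's real sleep is not visible in this log):

    nbd=nbd0
    for i in $(seq 1 20); do                      # waitfornbd: poll for the device node
        grep -q -w "$nbd" /proc/partitions && break
        sleep 0.1                                 # assumed interval, not shown in the trace
    done
    dd if="/dev/$nbd" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    test "$(stat -c %s /tmp/nbdtest)" != 0        # the probe read must return data
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256              # 1 MiB of random bytes
    dd if=/tmp/nbdrandtest of="/dev/$nbd" bs=4096 count=256 oflag=direct  # write through the device
    cmp -b -n 1M /tmp/nbdrandtest "/dev/$nbd"     # byte-compare the first 1 MiB
    rm -f /tmp/nbdtest /tmp/nbdrandtest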
00:11:02.595 06:37:15 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:02.595 06:37:15 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:11:02.595 06:37:15 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:11:02.595 06:37:15 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.595 06:37:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:02.595 ************************************ 00:11:02.595 START TEST bdev_verify 00:11:02.595 ************************************ 00:11:02.595 06:37:15 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:11:02.595 [2024-12-06 06:37:15.273371] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:11:02.595 [2024-12-06 06:37:15.273504] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62124 ] 00:11:02.852 [2024-12-06 06:37:15.431897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:02.852 [2024-12-06 06:37:15.532110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:02.852 [2024-12-06 06:37:15.532232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.419 Running I/O for 5 seconds... 
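bdev_verify above is a single bdevperf run against the bdevs declared in bdev.json, and the flags on that command line determine everything the result table below reports. The same invocation broken out for readability (flag glosses are from bdevperf's usage text as remembered, so treat them as hedged and confirm with -h, especially -C):

    # -q 128     outstanding I/Os per job (queue depth)
    # -o 4096    I/O size in bytes
    # -w verify  write each block, read it back, compare
    # -t 5       run time in seconds
    # -C         let every core submit I/O to every bdev (hedged gloss)
    # -m 0x3     core mask: two reactors, cores 0 and 1
    build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3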
00:11:05.728 20544.00 IOPS, 80.25 MiB/s
[2024-12-06T06:37:19.398Z] 21504.00 IOPS, 84.00 MiB/s
[2024-12-06T06:37:20.766Z] 22101.33 IOPS, 86.33 MiB/s
[2024-12-06T06:37:21.331Z] 22736.00 IOPS, 88.81 MiB/s
[2024-12-06T06:37:21.331Z] 23116.80 IOPS, 90.30 MiB/s
00:11:08.590 Latency(us)
00:11:08.590 [2024-12-06T06:37:21.331Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:08.590 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:08.590 Verification LBA range: start 0x0 length 0xbd0bd
00:11:08.590 Nvme0n1 : 5.06 1617.71 6.32 0.00 0.00 78949.13 16636.06 78239.90
00:11:08.590 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:08.590 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:11:08.590 Nvme0n1 : 5.09 1660.83 6.49 0.00 0.00 75933.86 8469.27 77433.30
00:11:08.590 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:08.590 Verification LBA range: start 0x0 length 0x4ff80
00:11:08.590 Nvme1n1p1 : 5.07 1617.20 6.32 0.00 0.00 78838.36 14821.22 75820.11
00:11:08.590 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:08.590 Verification LBA range: start 0x4ff80 length 0x4ff80
00:11:08.590 Nvme1n1p1 : 5.05 1648.74 6.44 0.00 0.00 77379.29 14216.27 80256.39
00:11:08.590 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:08.590 Verification LBA range: start 0x0 length 0x4ff7f
00:11:08.590 Nvme1n1p2 : 5.07 1616.16 6.31 0.00 0.00 78744.81 16636.06 72190.42
00:11:08.590 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:08.590 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:11:08.590 Nvme1n1p2 : 5.05 1648.25 6.44 0.00 0.00 77201.58 16131.94 76223.41
00:11:08.590 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:08.590 Verification LBA range: start 0x0 length 0x80000
00:11:08.590 Nvme2n1 : 5.07 1615.68 6.31 0.00 0.00 78608.63 17039.36 70980.53
00:11:08.590 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:08.590 Verification LBA range: start 0x80000 length 0x80000
00:11:08.590 Nvme2n1 : 5.05 1647.77 6.44 0.00 0.00 77064.90 16232.76 73803.62
00:11:08.590 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:08.590 Verification LBA range: start 0x0 length 0x80000
00:11:08.590 Nvme2n2 : 5.07 1615.22 6.31 0.00 0.00 78469.70 17442.66 67350.84
00:11:08.590 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:08.590 Verification LBA range: start 0x80000 length 0x80000
00:11:08.590 Nvme2n2 : 5.08 1662.88 6.50 0.00 0.00 76301.30 10637.00 68560.74
00:11:08.590 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:08.590 Verification LBA range: start 0x0 length 0x80000
00:11:08.590 Nvme2n3 : 5.07 1614.78 6.31 0.00 0.00 78317.56 16535.24 73803.62
00:11:08.590 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:08.590 Verification LBA range: start 0x80000 length 0x80000
00:11:08.590 Nvme2n3 : 5.08 1661.71 6.49 0.00 0.00 76181.50 10284.11 70173.93
00:11:08.590 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:08.590 Verification LBA range: start 0x0 length 0x20000
00:11:08.590 Nvme3n1 : 5.08 1625.01 6.35 0.00 0.00 77721.66 1940.87 77836.60
00:11:08.590 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:08.590 Verification LBA range: start 0x20000 length 0x20000
00:11:08.591 Nvme3n1 : 5.09 1661.27 6.49 0.00 0.00 76041.60 9729.58 75416.81
[2024-12-06T06:37:21.332Z] ===================================================================================================================
00:11:08.591 [2024-12-06T06:37:21.332Z] Total : 22913.22 89.50 0.00 0.00 77540.14 1940.87 80256.39
00:11:09.963
00:11:09.963 real 0m7.396s
00:11:09.963 user 0m13.879s
00:11:09.963 sys 0m0.212s
00:11:09.963 06:37:22 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:09.963 06:37:22 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:11:09.963 ************************************
00:11:09.963 END TEST bdev_verify
00:11:09.963 ************************************
00:11:09.963 06:37:22 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:11:09.963 06:37:22 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:11:09.963 06:37:22 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:09.963 06:37:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:11:09.963 ************************************
00:11:09.963 START TEST bdev_verify_big_io
00:11:09.963 ************************************
00:11:09.963 06:37:22 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:11:10.221 [2024-12-06 06:37:22.706403] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization...
[2024-12-06 06:37:22.706542] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62218 ]
[2024-12-06 06:37:22.867912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:11:10.479 [2024-12-06 06:37:22.983170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:10.479 [2024-12-06 06:37:22.983555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:11.043 Running I/O for 5 seconds...
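A note on reading these bdevperf summaries before the next one lands: the MiB/s column is simply IOPS times the I/O size. For the verify Total above, 22913.22 IOPS x 4096 B is about 93.85 MB/s, which is 89.50 MiB/s after dividing by 2^20, exactly the printed figure. A one-line check:

    awk 'BEGIN { printf "%.2f MiB/s\n", 22913.22 * 4096 / (1024 * 1024) }'   # -> 89.50 MiB/s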
00:11:16.855 368.00 IOPS, 23.00 MiB/s
[2024-12-06T06:37:30.168Z] 2160.50 IOPS, 135.03 MiB/s
[2024-12-06T06:37:30.168Z] 2715.67 IOPS, 169.73 MiB/s
00:11:17.427 Latency(us)
00:11:17.427 [2024-12-06T06:37:30.168Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:17.427 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:17.427 Verification LBA range: start 0x0 length 0xbd0b
00:11:17.427 Nvme0n1 : 5.98 88.42 5.53 0.00 0.00 1350002.53 27021.00 1471232.79
00:11:17.427 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:17.427 Verification LBA range: start 0xbd0b length 0xbd0b
00:11:17.427 Nvme0n1 : 6.17 75.16 4.70 0.00 0.00 1615653.08 13712.15 2322999.14
00:11:17.427 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:17.427 Verification LBA range: start 0x0 length 0x4ff8
00:11:17.427 Nvme1n1p1 : 6.05 91.15 5.70 0.00 0.00 1303867.59 104857.60 1277649.53
00:11:17.427 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:17.427 Verification LBA range: start 0x4ff8 length 0x4ff8
00:11:17.427 Nvme1n1p1 : 6.12 101.31 6.33 0.00 0.00 1174982.75 105664.20 1129235.69
00:11:17.427 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:17.427 Verification LBA range: start 0x0 length 0x4ff7
00:11:17.427 Nvme1n1p2 : 6.10 99.83 6.24 0.00 0.00 1166970.59 60494.77 1142141.24
00:11:17.427 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:17.427 Verification LBA range: start 0x4ff7 length 0x4ff7
00:11:17.427 Nvme1n1p2 : 6.12 99.30 6.21 0.00 0.00 1151324.91 123409.33 1122782.92
00:11:17.427 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:17.427 Verification LBA range: start 0x0 length 0x8000
00:11:17.427 Nvme2n1 : 6.10 101.12 6.32 0.00 0.00 1117791.36 61301.37 1161499.57
00:11:17.427 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:17.427 Verification LBA range: start 0x8000 length 0x8000
00:11:17.427 Nvme2n1 : 6.13 96.49 6.03 0.00 0.00 1160616.54 76223.41 2155226.98
00:11:17.427 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:17.427 Verification LBA range: start 0x0 length 0x8000
00:11:17.427 Nvme2n2 : 6.10 101.43 6.34 0.00 0.00 1076903.64 61704.66 1187310.67
00:11:17.427 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:17.427 Verification LBA range: start 0x8000 length 0x8000
00:11:17.427 Nvme2n2 : 6.14 101.57 6.35 0.00 0.00 1072570.56 15426.17 2206849.18
00:11:17.427 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:17.427 Verification LBA range: start 0x0 length 0x8000
00:11:17.427 Nvme2n3 : 6.13 108.08 6.76 0.00 0.00 981806.53 28835.84 1213121.77
00:11:17.427 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:17.427 Verification LBA range: start 0x8000 length 0x8000
00:11:17.427 Nvme2n3 : 6.18 107.16 6.70 0.00 0.00 982215.05 15426.17 2232660.28
00:11:17.427 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:17.427 Verification LBA range: start 0x0 length 0x2000
00:11:17.427 Nvme3n1 : 6.15 120.52 7.53 0.00 0.00 852882.62 5116.85 1297007.85
00:11:17.427 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:17.427 Verification LBA range: start 0x2000 length 0x2000
00:11:17.427 Nvme3n1 : 6.20 126.62 7.91 0.00 0.00 805617.29 2823.09 2271376.94
00:11:17.427
[2024-12-06T06:37:30.168Z] =================================================================================================================== 00:11:17.427 [2024-12-06T06:37:30.168Z] Total : 1418.16 88.64 0.00 0.00 1105432.92 2823.09 2322999.14 00:11:19.948 00:11:19.948 real 0m9.708s 00:11:19.948 user 0m18.054s 00:11:19.948 sys 0m0.259s 00:11:19.948 06:37:32 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:19.948 06:37:32 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:11:19.948 ************************************ 00:11:19.948 END TEST bdev_verify_big_io 00:11:19.948 ************************************ 00:11:19.948 06:37:32 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:19.948 06:37:32 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:11:19.948 06:37:32 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:19.948 06:37:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:19.948 ************************************ 00:11:19.948 START TEST bdev_write_zeroes 00:11:19.948 ************************************ 00:11:19.948 06:37:32 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:19.948 [2024-12-06 06:37:32.449715] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:11:19.948 [2024-12-06 06:37:32.449836] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62334 ] 00:11:19.948 [2024-12-06 06:37:32.607246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.207 [2024-12-06 06:37:32.728891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.788 Running I/O for 1 seconds... 
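bdev_verify_big_io repeats the verify workload at -o 65536, so bandwidth stays in the same range while IOPS drop by roughly 16x: 1418.16 IOPS x 64 KiB is the 88.64 MiB/s Total above. The per-job rows are also roughly consistent with Little's law (outstanding I/Os ~ IOPS x average latency); for the first Nvme0n1 job that implies about 119 in flight against the configured depth of 128, with ramp-up explaining the gap:

    awk 'BEGIN { printf "implied depth ~ %.0f\n", 88.42 * 1350002.53 / 1e6 }'   # IOPS x avg latency (us)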
00:11:21.720 30684.00 IOPS, 119.86 MiB/s
00:11:21.720
00:11:21.720 Latency(us)
[2024-12-06T06:37:34.461Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:21.720 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:21.720 Nvme0n1 : 1.02 4188.66 16.36 0.00 0.00 30496.52 11443.59 545259.52
00:11:21.720 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:21.720 Nvme1n1p1 : 1.03 4618.42 18.04 0.00 0.00 27616.21 11393.18 338770.71
00:11:21.720 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:21.720 Nvme1n1p2 : 1.03 4529.77 17.69 0.00 0.00 28097.59 11342.77 275856.15
00:11:21.720 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:21.720 Nvme2n1 : 1.03 4443.86 17.36 0.00 0.00 28568.10 9628.75 353289.45
00:11:21.720 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:21.720 Nvme2n2 : 1.03 4477.70 17.49 0.00 0.00 28266.48 10384.94 346836.68
00:11:21.720 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:21.720 Nvme2n3 : 1.03 4432.79 17.32 0.00 0.00 28461.84 11494.01 345223.48
00:11:21.720 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:21.720 Nvme3n1 : 1.03 4467.51 17.45 0.00 0.00 28158.82 10032.05 341997.10
00:11:21.720 [2024-12-06T06:37:34.461Z] ===================================================================================================================
00:11:21.720 [2024-12-06T06:37:34.461Z] Total : 31158.71 121.71 0.00 0.00 28499.89 9628.75 545259.52
00:11:22.655
00:11:22.655 real 0m2.726s
00:11:22.655 user 0m2.418s
00:11:22.655 sys 0m0.194s
00:11:22.655 06:37:35 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:22.655 06:37:35 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:11:22.655 ************************************
00:11:22.655 END TEST bdev_write_zeroes
00:11:22.655 ************************************
00:11:22.655 06:37:35 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:11:22.655 06:37:35 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:11:22.655 06:37:35 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:22.655 06:37:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:11:22.655 ************************************
00:11:22.655 START TEST bdev_json_nonenclosed
00:11:22.655 ************************************
00:11:22.655 06:37:35 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:11:22.655 [2024-12-06 06:37:35.215325] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization...
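The two bdev_json tests starting here feed bdevperf deliberately malformed --json files. For contrast, the smallest well-formed shape json_config accepts is a top-level object carrying a "subsystems" array; a minimal sketch (trimmed to an empty bdev subsystem, an illustrative assumption rather than the repo's fixture):

    cat > /tmp/minimal.json <<'EOF'
    {
      "subsystems": [
        { "subsystem": "bdev", "config": [] }
      ]
    }
    EOF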
00:11:22.655 [2024-12-06 06:37:35.215451] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62387 ] 00:11:22.655 [2024-12-06 06:37:35.368454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.913 [2024-12-06 06:37:35.471658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.913 [2024-12-06 06:37:35.471745] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:11:22.913 [2024-12-06 06:37:35.471762] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:22.913 [2024-12-06 06:37:35.471772] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:23.169 00:11:23.169 real 0m0.502s 00:11:23.169 user 0m0.304s 00:11:23.169 sys 0m0.093s 00:11:23.169 06:37:35 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:23.169 ************************************ 00:11:23.169 END TEST bdev_json_nonenclosed 00:11:23.169 ************************************ 00:11:23.169 06:37:35 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:11:23.169 06:37:35 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:23.169 06:37:35 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:11:23.169 06:37:35 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:23.169 06:37:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:23.169 ************************************ 00:11:23.170 START TEST bdev_json_nonarray 00:11:23.170 ************************************ 00:11:23.170 06:37:35 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:23.170 [2024-12-06 06:37:35.757653] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:11:23.170 [2024-12-06 06:37:35.757884] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62413 ] 00:11:23.428 [2024-12-06 06:37:35.915968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.428 [2024-12-06 06:37:36.026925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.428 [2024-12-06 06:37:36.027033] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
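Both failures come from json_config_prepare_ctx rejecting the top level of the file before any bdev is touched. The repo's nonenclosed.json and nonarray.json are not shown in this log, but stand-ins like these would take the same two error paths:

    echo '[]'                 > /tmp/nonenclosed.json   # valid JSON, yet not enclosed in {}
    echo '{"subsystems": {}}' > /tmp/nonarray.json      # "subsystems" is not an array
    build/examples/bdevperf --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
    # expected: *ERROR*: Invalid JSON configuration: not enclosed in {}.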
00:11:23.428 [2024-12-06 06:37:36.027056] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:23.428 [2024-12-06 06:37:36.027067] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:23.686 ************************************ 00:11:23.686 END TEST bdev_json_nonarray 00:11:23.686 ************************************ 00:11:23.686 00:11:23.686 real 0m0.546s 00:11:23.686 user 0m0.349s 00:11:23.686 sys 0m0.092s 00:11:23.686 06:37:36 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:23.686 06:37:36 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:11:23.686 06:37:36 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:11:23.686 06:37:36 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:11:23.686 06:37:36 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:11:23.686 06:37:36 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:23.686 06:37:36 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:23.686 06:37:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:23.686 ************************************ 00:11:23.686 START TEST bdev_gpt_uuid 00:11:23.686 ************************************ 00:11:23.686 06:37:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:11:23.686 06:37:36 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:11:23.686 06:37:36 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:11:23.686 06:37:36 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62438 00:11:23.686 06:37:36 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:11:23.686 06:37:36 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:23.686 06:37:36 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62438 00:11:23.686 06:37:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 62438 ']' 00:11:23.686 06:37:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.686 06:37:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:23.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.686 06:37:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.686 06:37:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:23.686 06:37:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:23.686 [2024-12-06 06:37:36.351222] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
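Unlike the bdevperf-driven tests, bdev_gpt_uuid runs a standalone spdk_tgt and talks to it over the default RPC socket: load the bdev config, wait for examine to finish, then look each GPT partition up by its unique partition GUID. A hand-run sketch of the same sequence (paths and UUID as in this log; the polling loop is a stand-in for the waitforlisten helper):

    build/bin/spdk_tgt &
    until scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
    scripts/rpc.py load_config -j test/bdev/bdev.json
    scripts/rpc.py bdev_wait_for_examine
    scripts/rpc.py bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 | jq -r '.[0].aliases[0]'
    kill %1                                   # stop the target when done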
00:11:23.686 [2024-12-06 06:37:36.351354] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62438 ] 00:11:23.945 [2024-12-06 06:37:36.510244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.945 [2024-12-06 06:37:36.624219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.510 06:37:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:24.510 06:37:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:11:24.510 06:37:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:24.510 06:37:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.510 06:37:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:25.075 Some configs were skipped because the RPC state that can call them passed over. 00:11:25.075 06:37:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.075 06:37:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:11:25.075 06:37:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.075 06:37:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:25.075 06:37:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.075 06:37:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:11:25.075 06:37:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.075 06:37:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:25.075 06:37:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.075 06:37:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:11:25.075 { 00:11:25.075 "name": "Nvme1n1p1", 00:11:25.075 "aliases": [ 00:11:25.075 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:11:25.075 ], 00:11:25.075 "product_name": "GPT Disk", 00:11:25.075 "block_size": 4096, 00:11:25.075 "num_blocks": 655104, 00:11:25.075 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:11:25.075 "assigned_rate_limits": { 00:11:25.075 "rw_ios_per_sec": 0, 00:11:25.075 "rw_mbytes_per_sec": 0, 00:11:25.075 "r_mbytes_per_sec": 0, 00:11:25.075 "w_mbytes_per_sec": 0 00:11:25.075 }, 00:11:25.075 "claimed": false, 00:11:25.075 "zoned": false, 00:11:25.075 "supported_io_types": { 00:11:25.075 "read": true, 00:11:25.075 "write": true, 00:11:25.075 "unmap": true, 00:11:25.075 "flush": true, 00:11:25.075 "reset": true, 00:11:25.075 "nvme_admin": false, 00:11:25.075 "nvme_io": false, 00:11:25.075 "nvme_io_md": false, 00:11:25.075 "write_zeroes": true, 00:11:25.075 "zcopy": false, 00:11:25.075 "get_zone_info": false, 00:11:25.075 "zone_management": false, 00:11:25.075 "zone_append": false, 00:11:25.075 "compare": true, 00:11:25.075 "compare_and_write": false, 00:11:25.075 "abort": true, 00:11:25.075 "seek_hole": false, 00:11:25.075 "seek_data": false, 00:11:25.075 "copy": true, 00:11:25.075 "nvme_iov_md": false 00:11:25.075 }, 00:11:25.075 "driver_specific": { 
00:11:25.075 "gpt": { 00:11:25.075 "base_bdev": "Nvme1n1", 00:11:25.075 "offset_blocks": 256, 00:11:25.075 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:11:25.075 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:11:25.075 "partition_name": "SPDK_TEST_first" 00:11:25.075 } 00:11:25.075 } 00:11:25.075 } 00:11:25.075 ]' 00:11:25.075 06:37:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:11:25.075 06:37:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:11:25.075 06:37:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:11:25.075 06:37:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:11:25.075 06:37:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:11:25.075 06:37:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:11:25.075 06:37:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:11:25.075 06:37:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.075 06:37:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:25.075 06:37:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.075 06:37:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:11:25.075 { 00:11:25.075 "name": "Nvme1n1p2", 00:11:25.075 "aliases": [ 00:11:25.075 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:11:25.075 ], 00:11:25.075 "product_name": "GPT Disk", 00:11:25.075 "block_size": 4096, 00:11:25.075 "num_blocks": 655103, 00:11:25.075 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:11:25.075 "assigned_rate_limits": { 00:11:25.075 "rw_ios_per_sec": 0, 00:11:25.075 "rw_mbytes_per_sec": 0, 00:11:25.075 "r_mbytes_per_sec": 0, 00:11:25.075 "w_mbytes_per_sec": 0 00:11:25.075 }, 00:11:25.075 "claimed": false, 00:11:25.075 "zoned": false, 00:11:25.075 "supported_io_types": { 00:11:25.075 "read": true, 00:11:25.075 "write": true, 00:11:25.075 "unmap": true, 00:11:25.075 "flush": true, 00:11:25.075 "reset": true, 00:11:25.075 "nvme_admin": false, 00:11:25.075 "nvme_io": false, 00:11:25.075 "nvme_io_md": false, 00:11:25.075 "write_zeroes": true, 00:11:25.075 "zcopy": false, 00:11:25.075 "get_zone_info": false, 00:11:25.075 "zone_management": false, 00:11:25.075 "zone_append": false, 00:11:25.075 "compare": true, 00:11:25.075 "compare_and_write": false, 00:11:25.075 "abort": true, 00:11:25.075 "seek_hole": false, 00:11:25.075 "seek_data": false, 00:11:25.075 "copy": true, 00:11:25.075 "nvme_iov_md": false 00:11:25.075 }, 00:11:25.075 "driver_specific": { 00:11:25.075 "gpt": { 00:11:25.075 "base_bdev": "Nvme1n1", 00:11:25.075 "offset_blocks": 655360, 00:11:25.075 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:11:25.075 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:11:25.075 "partition_name": "SPDK_TEST_second" 00:11:25.075 } 00:11:25.075 } 00:11:25.075 } 00:11:25.075 ]' 00:11:25.075 06:37:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:11:25.075 06:37:37 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:11:25.075 06:37:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:11:25.075 06:37:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:11:25.075 06:37:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:11:25.075 06:37:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:11:25.075 06:37:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 62438 00:11:25.075 06:37:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 62438 ']' 00:11:25.075 06:37:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 62438 00:11:25.075 06:37:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:11:25.075 06:37:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:25.075 06:37:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62438 00:11:25.075 06:37:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:25.075 killing process with pid 62438 00:11:25.075 06:37:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:25.075 06:37:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62438' 00:11:25.075 06:37:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 62438 00:11:25.075 06:37:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 62438 00:11:26.971 00:11:26.971 real 0m3.092s 00:11:26.971 user 0m3.231s 00:11:26.971 sys 0m0.367s 00:11:26.971 06:37:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:26.971 06:37:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:26.971 ************************************ 00:11:26.971 END TEST bdev_gpt_uuid 00:11:26.971 ************************************ 00:11:26.971 06:37:39 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:11:26.971 06:37:39 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:11:26.971 06:37:39 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:11:26.971 06:37:39 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:11:26.971 06:37:39 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:26.971 06:37:39 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:11:26.971 06:37:39 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:11:26.971 06:37:39 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:11:26.971 06:37:39 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:26.971 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:27.230 Waiting for block devices as requested 00:11:27.230 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:27.230 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:11:27.230 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:27.488 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:32.827 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:32.827 06:37:45 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:11:32.827 06:37:45 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:11:32.827 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:11:32.827 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:11:32.827 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:11:32.827 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:11:32.827 06:37:45 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:11:32.827 00:11:32.827 real 0m57.543s 00:11:32.827 user 1m14.009s 00:11:32.827 sys 0m7.922s 00:11:32.827 06:37:45 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:32.827 ************************************ 00:11:32.827 END TEST blockdev_nvme_gpt 00:11:32.827 ************************************ 00:11:32.827 06:37:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:32.827 06:37:45 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:11:32.827 06:37:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:32.827 06:37:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:32.827 06:37:45 -- common/autotest_common.sh@10 -- # set +x 00:11:32.827 ************************************ 00:11:32.827 START TEST nvme 00:11:32.827 ************************************ 00:11:32.827 06:37:45 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:11:33.084 * Looking for test storage... 00:11:33.084 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:33.084 06:37:45 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:33.084 06:37:45 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:33.084 06:37:45 nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:11:33.084 06:37:45 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:33.084 06:37:45 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:33.084 06:37:45 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:33.084 06:37:45 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:33.084 06:37:45 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:33.084 06:37:45 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:33.084 06:37:45 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:33.084 06:37:45 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:33.084 06:37:45 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:33.084 06:37:45 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:33.084 06:37:45 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:33.084 06:37:45 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:33.084 06:37:45 nvme -- scripts/common.sh@344 -- # case "$op" in 00:11:33.084 06:37:45 nvme -- scripts/common.sh@345 -- # : 1 00:11:33.084 06:37:45 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:33.084 06:37:45 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:33.084 06:37:45 nvme -- scripts/common.sh@365 -- # decimal 1 00:11:33.084 06:37:45 nvme -- scripts/common.sh@353 -- # local d=1 00:11:33.084 06:37:45 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:33.084 06:37:45 nvme -- scripts/common.sh@355 -- # echo 1 00:11:33.084 06:37:45 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:33.084 06:37:45 nvme -- scripts/common.sh@366 -- # decimal 2 00:11:33.085 06:37:45 nvme -- scripts/common.sh@353 -- # local d=2 00:11:33.085 06:37:45 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:33.085 06:37:45 nvme -- scripts/common.sh@355 -- # echo 2 00:11:33.085 06:37:45 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:33.085 06:37:45 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:33.085 06:37:45 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:33.085 06:37:45 nvme -- scripts/common.sh@368 -- # return 0 00:11:33.085 06:37:45 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:33.085 06:37:45 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:33.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.085 --rc genhtml_branch_coverage=1 00:11:33.085 --rc genhtml_function_coverage=1 00:11:33.085 --rc genhtml_legend=1 00:11:33.085 --rc geninfo_all_blocks=1 00:11:33.085 --rc geninfo_unexecuted_blocks=1 00:11:33.085 00:11:33.085 ' 00:11:33.085 06:37:45 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:33.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.085 --rc genhtml_branch_coverage=1 00:11:33.085 --rc genhtml_function_coverage=1 00:11:33.085 --rc genhtml_legend=1 00:11:33.085 --rc geninfo_all_blocks=1 00:11:33.085 --rc geninfo_unexecuted_blocks=1 00:11:33.085 00:11:33.085 ' 00:11:33.085 06:37:45 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:33.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.085 --rc genhtml_branch_coverage=1 00:11:33.085 --rc genhtml_function_coverage=1 00:11:33.085 --rc genhtml_legend=1 00:11:33.085 --rc geninfo_all_blocks=1 00:11:33.085 --rc geninfo_unexecuted_blocks=1 00:11:33.085 00:11:33.085 ' 00:11:33.085 06:37:45 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:33.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.085 --rc genhtml_branch_coverage=1 00:11:33.085 --rc genhtml_function_coverage=1 00:11:33.085 --rc genhtml_legend=1 00:11:33.085 --rc geninfo_all_blocks=1 00:11:33.085 --rc geninfo_unexecuted_blocks=1 00:11:33.085 00:11:33.085 ' 00:11:33.085 06:37:45 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:33.649 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:34.214 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:34.214 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:34.214 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:34.214 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:34.214 06:37:46 nvme -- nvme/nvme.sh@79 -- # uname 00:11:34.214 06:37:46 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:11:34.214 06:37:46 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:11:34.214 06:37:46 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:11:34.214 06:37:46 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:11:34.214 06:37:46 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:11:34.214 06:37:46 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:11:34.214 Waiting for stub to ready for secondary processes... 00:11:34.214 06:37:46 nvme -- common/autotest_common.sh@1075 -- # stubpid=63079 00:11:34.214 06:37:46 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:11:34.214 06:37:46 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:11:34.214 06:37:46 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:11:34.214 06:37:46 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/63079 ]] 00:11:34.214 06:37:46 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:11:34.214 [2024-12-06 06:37:46.844648] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:11:34.214 [2024-12-06 06:37:46.844944] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:11:35.150 [2024-12-06 06:37:47.649132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:35.150 [2024-12-06 06:37:47.746806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:35.150 [2024-12-06 06:37:47.747139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:35.150 [2024-12-06 06:37:47.747167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:35.150 [2024-12-06 06:37:47.764581] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:11:35.150 [2024-12-06 06:37:47.764697] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:35.150 [2024-12-06 06:37:47.776527] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:11:35.150 [2024-12-06 06:37:47.776773] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:11:35.150 [2024-12-06 06:37:47.781804] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:35.150 [2024-12-06 06:37:47.782196] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:11:35.150 [2024-12-06 06:37:47.782321] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:11:35.150 [2024-12-06 06:37:47.786643] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:35.150 [2024-12-06 06:37:47.786828] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:11:35.150 [2024-12-06 06:37:47.786875] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:11:35.150 [2024-12-06 06:37:47.788472] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:35.150 [2024-12-06 06:37:47.788594] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:11:35.150 [2024-12-06 06:37:47.788640] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:11:35.150 [2024-12-06 06:37:47.788671] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:11:35.150 [2024-12-06 06:37:47.788702] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:11:35.150 done. 00:11:35.150 06:37:47 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:11:35.150 06:37:47 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:11:35.150 06:37:47 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:11:35.150 06:37:47 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:11:35.150 06:37:47 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.150 06:37:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:35.150 ************************************ 00:11:35.150 START TEST nvme_reset 00:11:35.150 ************************************ 00:11:35.150 06:37:47 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:11:35.412 Initializing NVMe Controllers 00:11:35.412 Skipping QEMU NVMe SSD at 0000:00:10.0 00:11:35.412 Skipping QEMU NVMe SSD at 0000:00:11.0 00:11:35.412 Skipping QEMU NVMe SSD at 0000:00:13.0 00:11:35.412 Skipping QEMU NVMe SSD at 0000:00:12.0 00:11:35.412 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:11:35.412 ************************************ 00:11:35.412 END TEST nvme_reset 00:11:35.412 ************************************ 00:11:35.412 00:11:35.412 real 0m0.205s 00:11:35.412 user 0m0.061s 00:11:35.412 sys 0m0.100s 00:11:35.412 06:37:48 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.412 06:37:48 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:11:35.412 06:37:48 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:11:35.412 06:37:48 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:35.412 06:37:48 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.412 06:37:48 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:35.412 ************************************ 00:11:35.412 START TEST nvme_identify 00:11:35.412 ************************************ 00:11:35.412 06:37:48 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:11:35.412 06:37:48 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:11:35.412 06:37:48 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:11:35.412 06:37:48 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:11:35.412 06:37:48 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:11:35.412 06:37:48 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:35.412 06:37:48 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:11:35.412 06:37:48 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:35.412 06:37:48 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:35.412 06:37:48 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:35.412 06:37:48 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:35.412 06:37:48 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:35.412 06:37:48 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:11:35.674 [2024-12-06 
06:37:48.299772] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 63100 terminated unexpected 00:11:35.675 ===================================================== 00:11:35.675 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:35.675 ===================================================== 00:11:35.675 Controller Capabilities/Features 00:11:35.675 ================================ 00:11:35.675 Vendor ID: 1b36 00:11:35.675 Subsystem Vendor ID: 1af4 00:11:35.675 Serial Number: 12340 00:11:35.675 Model Number: QEMU NVMe Ctrl 00:11:35.675 Firmware Version: 8.0.0 00:11:35.675 Recommended Arb Burst: 6 00:11:35.675 IEEE OUI Identifier: 00 54 52 00:11:35.675 Multi-path I/O 00:11:35.675 May have multiple subsystem ports: No 00:11:35.675 May have multiple controllers: No 00:11:35.675 Associated with SR-IOV VF: No 00:11:35.675 Max Data Transfer Size: 524288 00:11:35.675 Max Number of Namespaces: 256 00:11:35.675 Max Number of I/O Queues: 64 00:11:35.675 NVMe Specification Version (VS): 1.4 00:11:35.675 NVMe Specification Version (Identify): 1.4 00:11:35.675 Maximum Queue Entries: 2048 00:11:35.675 Contiguous Queues Required: Yes 00:11:35.675 Arbitration Mechanisms Supported 00:11:35.675 Weighted Round Robin: Not Supported 00:11:35.675 Vendor Specific: Not Supported 00:11:35.675 Reset Timeout: 7500 ms 00:11:35.675 Doorbell Stride: 4 bytes 00:11:35.675 NVM Subsystem Reset: Not Supported 00:11:35.675 Command Sets Supported 00:11:35.675 NVM Command Set: Supported 00:11:35.675 Boot Partition: Not Supported 00:11:35.675 Memory Page Size Minimum: 4096 bytes 00:11:35.675 Memory Page Size Maximum: 65536 bytes 00:11:35.675 Persistent Memory Region: Not Supported 00:11:35.675 Optional Asynchronous Events Supported 00:11:35.675 Namespace Attribute Notices: Supported 00:11:35.675 Firmware Activation Notices: Not Supported 00:11:35.675 ANA Change Notices: Not Supported 00:11:35.675 PLE Aggregate Log Change Notices: Not Supported 00:11:35.675 LBA Status Info Alert Notices: Not Supported 00:11:35.675 EGE Aggregate Log Change Notices: Not Supported 00:11:35.675 Normal NVM Subsystem Shutdown event: Not Supported 00:11:35.675 Zone Descriptor Change Notices: Not Supported 00:11:35.675 Discovery Log Change Notices: Not Supported 00:11:35.675 Controller Attributes 00:11:35.675 128-bit Host Identifier: Not Supported 00:11:35.675 Non-Operational Permissive Mode: Not Supported 00:11:35.675 NVM Sets: Not Supported 00:11:35.675 Read Recovery Levels: Not Supported 00:11:35.675 Endurance Groups: Not Supported 00:11:35.675 Predictable Latency Mode: Not Supported 00:11:35.675 Traffic Based Keep ALive: Not Supported 00:11:35.675 Namespace Granularity: Not Supported 00:11:35.675 SQ Associations: Not Supported 00:11:35.675 UUID List: Not Supported 00:11:35.675 Multi-Domain Subsystem: Not Supported 00:11:35.675 Fixed Capacity Management: Not Supported 00:11:35.675 Variable Capacity Management: Not Supported 00:11:35.675 Delete Endurance Group: Not Supported 00:11:35.675 Delete NVM Set: Not Supported 00:11:35.675 Extended LBA Formats Supported: Supported 00:11:35.675 Flexible Data Placement Supported: Not Supported 00:11:35.675 00:11:35.675 Controller Memory Buffer Support 00:11:35.675 ================================ 00:11:35.675 Supported: No 00:11:35.675 00:11:35.675 Persistent Memory Region Support 00:11:35.675 ================================ 00:11:35.675 Supported: No 00:11:35.675 00:11:35.675 Admin Command Set Attributes 00:11:35.675 ============================ 00:11:35.675 Security Send/Receive: Not Supported 00:11:35.675
Format NVM: Supported 00:11:35.675 Firmware Activate/Download: Not Supported 00:11:35.675 Namespace Management: Supported 00:11:35.675 Device Self-Test: Not Supported 00:11:35.675 Directives: Supported 00:11:35.675 NVMe-MI: Not Supported 00:11:35.675 Virtualization Management: Not Supported 00:11:35.675 Doorbell Buffer Config: Supported 00:11:35.675 Get LBA Status Capability: Not Supported 00:11:35.675 Command & Feature Lockdown Capability: Not Supported 00:11:35.675 Abort Command Limit: 4 00:11:35.675 Async Event Request Limit: 4 00:11:35.675 Number of Firmware Slots: N/A 00:11:35.675 Firmware Slot 1 Read-Only: N/A 00:11:35.675 Firmware Activation Without Reset: N/A 00:11:35.675 Multiple Update Detection Support: N/A 00:11:35.675 Firmware Update Granularity: No Information Provided 00:11:35.675 Per-Namespace SMART Log: Yes 00:11:35.675 Asymmetric Namespace Access Log Page: Not Supported 00:11:35.675 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:11:35.675 Command Effects Log Page: Supported 00:11:35.675 Get Log Page Extended Data: Supported 00:11:35.675 Telemetry Log Pages: Not Supported 00:11:35.675 Persistent Event Log Pages: Not Supported 00:11:35.675 Supported Log Pages Log Page: May Support 00:11:35.675 Commands Supported & Effects Log Page: Not Supported 00:11:35.675 Feature Identifiers & Effects Log Page:May Support 00:11:35.675 NVMe-MI Commands & Effects Log Page: May Support 00:11:35.675 Data Area 4 for Telemetry Log: Not Supported 00:11:35.675 Error Log Page Entries Supported: 1 00:11:35.675 Keep Alive: Not Supported 00:11:35.675 00:11:35.675 NVM Command Set Attributes 00:11:35.675 ========================== 00:11:35.675 Submission Queue Entry Size 00:11:35.675 Max: 64 00:11:35.675 Min: 64 00:11:35.675 Completion Queue Entry Size 00:11:35.675 Max: 16 00:11:35.675 Min: 16 00:11:35.675 Number of Namespaces: 256 00:11:35.675 Compare Command: Supported 00:11:35.675 Write Uncorrectable Command: Not Supported 00:11:35.675 Dataset Management Command: Supported 00:11:35.675 Write Zeroes Command: Supported 00:11:35.675 Set Features Save Field: Supported 00:11:35.675 Reservations: Not Supported 00:11:35.675 Timestamp: Supported 00:11:35.675 Copy: Supported 00:11:35.675 Volatile Write Cache: Present 00:11:35.675 Atomic Write Unit (Normal): 1 00:11:35.675 Atomic Write Unit (PFail): 1 00:11:35.675 Atomic Compare & Write Unit: 1 00:11:35.675 Fused Compare & Write: Not Supported 00:11:35.675 Scatter-Gather List 00:11:35.675 SGL Command Set: Supported 00:11:35.675 SGL Keyed: Not Supported 00:11:35.675 SGL Bit Bucket Descriptor: Not Supported 00:11:35.675 SGL Metadata Pointer: Not Supported 00:11:35.675 Oversized SGL: Not Supported 00:11:35.675 SGL Metadata Address: Not Supported 00:11:35.675 SGL Offset: Not Supported 00:11:35.675 Transport SGL Data Block: Not Supported 00:11:35.675 Replay Protected Memory Block: Not Supported 00:11:35.675 00:11:35.675 Firmware Slot Information 00:11:35.675 ========================= 00:11:35.675 Active slot: 1 00:11:35.675 Slot 1 Firmware Revision: 1.0 00:11:35.675 00:11:35.675 00:11:35.675 Commands Supported and Effects 00:11:35.675 ============================== 00:11:35.675 Admin Commands 00:11:35.675 -------------- 00:11:35.675 Delete I/O Submission Queue (00h): Supported 00:11:35.675 Create I/O Submission Queue (01h): Supported 00:11:35.675 Get Log Page (02h): Supported 00:11:35.675 Delete I/O Completion Queue (04h): Supported 00:11:35.675 Create I/O Completion Queue (05h): Supported 00:11:35.675 Identify (06h): Supported 00:11:35.675 Abort (08h): Supported 
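The identify dump above comes from a single spdk_nvme_identify invocation, run after the test collected controller addresses with get_nvme_bdfs (scripts/gen_nvme.sh piped through jq, as traced earlier in this log). A minimal standalone sketch of that enumeration pattern, assuming the same repo layout as this run and that gen_nvme.sh emits the usual SPDK bdev-config JSON; the error message is illustrative:

    #!/usr/bin/env bash
    # Sketch: collect NVMe PCI addresses (BDFs) from the generated SPDK config,
    # then run identify, which walks every attached controller in one pass.
    rootdir=/home/vagrant/spdk_repo/spdk
    mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers detected" >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"
    "$rootdir/build/bin/spdk_nvme_identify" -i 0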
00:11:35.675 Set Features (09h): Supported 00:11:35.675 Get Features (0Ah): Supported 00:11:35.675 Asynchronous Event Request (0Ch): Supported 00:11:35.675 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:35.675 Directive Send (19h): Supported 00:11:35.675 Directive Receive (1Ah): Supported 00:11:35.675 Virtualization Management (1Ch): Supported 00:11:35.675 Doorbell Buffer Config (7Ch): Supported 00:11:35.675 Format NVM (80h): Supported LBA-Change 00:11:35.675 I/O Commands 00:11:35.675 ------------ 00:11:35.675 Flush (00h): Supported LBA-Change 00:11:35.675 Write (01h): Supported LBA-Change 00:11:35.675 Read (02h): Supported 00:11:35.675 Compare (05h): Supported 00:11:35.675 Write Zeroes (08h): Supported LBA-Change 00:11:35.675 Dataset Management (09h): Supported LBA-Change 00:11:35.675 Unknown (0Ch): Supported 00:11:35.675 Unknown (12h): Supported 00:11:35.675 Copy (19h): Supported LBA-Change 00:11:35.676 Unknown (1Dh): Supported LBA-Change 00:11:35.676 00:11:35.676 Error Log 00:11:35.676 ========= 00:11:35.676 00:11:35.676 Arbitration 00:11:35.676 =========== 00:11:35.676 Arbitration Burst: no limit 00:11:35.676 00:11:35.676 Power Management 00:11:35.676 ================ 00:11:35.676 Number of Power States: 1 00:11:35.676 Current Power State: Power State #0 00:11:35.676 Power State #0: 00:11:35.676 Max Power: 25.00 W 00:11:35.676 Non-Operational State: Operational 00:11:35.676 Entry Latency: 16 microseconds 00:11:35.676 Exit Latency: 4 microseconds 00:11:35.676 Relative Read Throughput: 0 00:11:35.676 Relative Read Latency: 0 00:11:35.676 Relative Write Throughput: 0 00:11:35.676 Relative Write Latency: 0 00:11:35.676 [2024-12-06 06:37:48.300626] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 63100 terminated unexpected 00:11:35.676 Idle Power: Not Reported 00:11:35.676 Active Power: Not Reported 00:11:35.676 Non-Operational Permissive Mode: Not Supported 00:11:35.676 00:11:35.676 Health Information 00:11:35.676 ================== 00:11:35.676 Critical Warnings: 00:11:35.676 Available Spare Space: OK 00:11:35.676 Temperature: OK 00:11:35.676 Device Reliability: OK 00:11:35.676 Read Only: No 00:11:35.676 Volatile Memory Backup: OK 00:11:35.676 Current Temperature: 323 Kelvin (50 Celsius) 00:11:35.676 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:35.676 Available Spare: 0% 00:11:35.676 Available Spare Threshold: 0% 00:11:35.676 Life Percentage Used: 0% 00:11:35.676 Data Units Read: 638 00:11:35.676 Data Units Written: 566 00:11:35.676 Host Read Commands: 34165 00:11:35.676 Host Write Commands: 33951 00:11:35.676 Controller Busy Time: 0 minutes 00:11:35.676 Power Cycles: 0 00:11:35.676 Power On Hours: 0 hours 00:11:35.676 Unsafe Shutdowns: 0 00:11:35.676 Unrecoverable Media Errors: 0 00:11:35.676 Lifetime Error Log Entries: 0 00:11:35.676 Warning Temperature Time: 0 minutes 00:11:35.676 Critical Temperature Time: 0 minutes 00:11:35.676 00:11:35.676 Number of Queues 00:11:35.676 ================ 00:11:35.676 Number of I/O Submission Queues: 64 00:11:35.676 Number of I/O Completion Queues: 64 00:11:35.676 00:11:35.676 ZNS Specific Controller Data 00:11:35.676 ============================ 00:11:35.676 Zone Append Size Limit: 0 00:11:35.676 00:11:35.676 00:11:35.676 Active Namespaces 00:11:35.676 ================= 00:11:35.676 Namespace ID:1 00:11:35.676 Error Recovery Timeout: Unlimited 00:11:35.676 Command Set Identifier: NVM (00h) 00:11:35.676 Deallocate: Supported 00:11:35.676 Deallocated/Unwritten
Error: Supported 00:11:35.676 Deallocated Read Value: All 0x00 00:11:35.676 Deallocate in Write Zeroes: Not Supported 00:11:35.676 Deallocated Guard Field: 0xFFFF 00:11:35.676 Flush: Supported 00:11:35.676 Reservation: Not Supported 00:11:35.676 Metadata Transferred as: Separate Metadata Buffer 00:11:35.676 Namespace Sharing Capabilities: Private 00:11:35.676 Size (in LBAs): 1548666 (5GiB) 00:11:35.676 Capacity (in LBAs): 1548666 (5GiB) 00:11:35.676 Utilization (in LBAs): 1548666 (5GiB) 00:11:35.676 Thin Provisioning: Not Supported 00:11:35.676 Per-NS Atomic Units: No 00:11:35.676 Maximum Single Source Range Length: 128 00:11:35.676 Maximum Copy Length: 128 00:11:35.676 Maximum Source Range Count: 128 00:11:35.676 NGUID/EUI64 Never Reused: No 00:11:35.676 Namespace Write Protected: No 00:11:35.676 Number of LBA Formats: 8 00:11:35.676 Current LBA Format: LBA Format #07 00:11:35.676 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:35.676 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:35.676 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:35.676 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:35.676 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:35.676 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:35.676 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:35.676 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:35.676 00:11:35.676 NVM Specific Namespace Data 00:11:35.676 =========================== 00:11:35.676 Logical Block Storage Tag Mask: 0 00:11:35.676 Protection Information Capabilities: 00:11:35.676 16b Guard Protection Information Storage Tag Support: No 00:11:35.676 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:35.676 Storage Tag Check Read Support: No 00:11:35.676 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.676 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.676 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.676 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.676 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.676 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.676 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.676 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.676 ===================================================== 00:11:35.676 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:35.676 ===================================================== 00:11:35.676 Controller Capabilities/Features 00:11:35.676 ================================ 00:11:35.676 Vendor ID: 1b36 00:11:35.676 Subsystem Vendor ID: 1af4 00:11:35.676 Serial Number: 12341 00:11:35.676 Model Number: QEMU NVMe Ctrl 00:11:35.676 Firmware Version: 8.0.0 00:11:35.676 Recommended Arb Burst: 6 00:11:35.676 IEEE OUI Identifier: 00 54 52 00:11:35.676 Multi-path I/O 00:11:35.676 May have multiple subsystem ports: No 00:11:35.676 May have multiple controllers: No 00:11:35.676 Associated with SR-IOV VF: No 00:11:35.676 Max Data Transfer Size: 524288 00:11:35.676 Max Number of Namespaces: 256 00:11:35.676 Max Number of I/O Queues: 64 00:11:35.676 NVMe Specification Version (VS): 1.4 00:11:35.676 NVMe Specification Version (Identify): 
1.4 00:11:35.676 Maximum Queue Entries: 2048 00:11:35.676 Contiguous Queues Required: Yes 00:11:35.676 Arbitration Mechanisms Supported 00:11:35.676 Weighted Round Robin: Not Supported 00:11:35.676 Vendor Specific: Not Supported 00:11:35.676 Reset Timeout: 7500 ms 00:11:35.676 Doorbell Stride: 4 bytes 00:11:35.676 NVM Subsystem Reset: Not Supported 00:11:35.676 Command Sets Supported 00:11:35.676 NVM Command Set: Supported 00:11:35.676 Boot Partition: Not Supported 00:11:35.676 Memory Page Size Minimum: 4096 bytes 00:11:35.676 Memory Page Size Maximum: 65536 bytes 00:11:35.676 Persistent Memory Region: Not Supported 00:11:35.676 Optional Asynchronous Events Supported 00:11:35.676 Namespace Attribute Notices: Supported 00:11:35.676 Firmware Activation Notices: Not Supported 00:11:35.676 ANA Change Notices: Not Supported 00:11:35.676 PLE Aggregate Log Change Notices: Not Supported 00:11:35.676 LBA Status Info Alert Notices: Not Supported 00:11:35.676 EGE Aggregate Log Change Notices: Not Supported 00:11:35.676 Normal NVM Subsystem Shutdown event: Not Supported 00:11:35.676 Zone Descriptor Change Notices: Not Supported 00:11:35.676 Discovery Log Change Notices: Not Supported 00:11:35.676 Controller Attributes 00:11:35.676 128-bit Host Identifier: Not Supported 00:11:35.676 Non-Operational Permissive Mode: Not Supported 00:11:35.676 NVM Sets: Not Supported 00:11:35.676 Read Recovery Levels: Not Supported 00:11:35.676 Endurance Groups: Not Supported 00:11:35.676 Predictable Latency Mode: Not Supported 00:11:35.676 Traffic Based Keep ALive: Not Supported 00:11:35.676 Namespace Granularity: Not Supported 00:11:35.676 SQ Associations: Not Supported 00:11:35.676 UUID List: Not Supported 00:11:35.676 Multi-Domain Subsystem: Not Supported 00:11:35.676 Fixed Capacity Management: Not Supported 00:11:35.676 Variable Capacity Management: Not Supported 00:11:35.676 Delete Endurance Group: Not Supported 00:11:35.676 Delete NVM Set: Not Supported 00:11:35.676 Extended LBA Formats Supported: Supported 00:11:35.676 Flexible Data Placement Supported: Not Supported 00:11:35.676 00:11:35.676 Controller Memory Buffer Support 00:11:35.676 ================================ 00:11:35.676 Supported: No 00:11:35.676 00:11:35.676 Persistent Memory Region Support 00:11:35.676 ================================ 00:11:35.676 Supported: No 00:11:35.676 00:11:35.676 Admin Command Set Attributes 00:11:35.676 ============================ 00:11:35.676 Security Send/Receive: Not Supported 00:11:35.677 Format NVM: Supported 00:11:35.677 Firmware Activate/Download: Not Supported 00:11:35.677 Namespace Management: Supported 00:11:35.677 Device Self-Test: Not Supported 00:11:35.677 Directives: Supported 00:11:35.677 NVMe-MI: Not Supported 00:11:35.677 Virtualization Management: Not Supported 00:11:35.677 Doorbell Buffer Config: Supported 00:11:35.677 Get LBA Status Capability: Not Supported 00:11:35.677 Command & Feature Lockdown Capability: Not Supported 00:11:35.677 Abort Command Limit: 4 00:11:35.677 Async Event Request Limit: 4 00:11:35.677 Number of Firmware Slots: N/A 00:11:35.677 Firmware Slot 1 Read-Only: N/A 00:11:35.677 Firmware Activation Without Reset: N/A 00:11:35.677 Multiple Update Detection Support: N/A 00:11:35.677 Firmware Update Granularity: No Information Provided 00:11:35.677 Per-Namespace SMART Log: Yes 00:11:35.677 Asymmetric Namespace Access Log Page: Not Supported 00:11:35.677 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:11:35.677 Command Effects Log Page: Supported 00:11:35.677 Get Log Page Extended Data: 
Supported 00:11:35.677 Telemetry Log Pages: Not Supported 00:11:35.677 Persistent Event Log Pages: Not Supported 00:11:35.677 Supported Log Pages Log Page: May Support 00:11:35.677 Commands Supported & Effects Log Page: Not Supported 00:11:35.677 Feature Identifiers & Effects Log Page:May Support 00:11:35.677 NVMe-MI Commands & Effects Log Page: May Support 00:11:35.677 Data Area 4 for Telemetry Log: Not Supported 00:11:35.677 Error Log Page Entries Supported: 1 00:11:35.677 Keep Alive: Not Supported 00:11:35.677 00:11:35.677 NVM Command Set Attributes 00:11:35.677 ========================== 00:11:35.677 Submission Queue Entry Size 00:11:35.677 Max: 64 00:11:35.677 Min: 64 00:11:35.677 Completion Queue Entry Size 00:11:35.677 Max: 16 00:11:35.677 Min: 16 00:11:35.677 Number of Namespaces: 256 00:11:35.677 Compare Command: Supported 00:11:35.677 Write Uncorrectable Command: Not Supported 00:11:35.677 Dataset Management Command: Supported 00:11:35.677 Write Zeroes Command: Supported 00:11:35.677 Set Features Save Field: Supported 00:11:35.677 Reservations: Not Supported 00:11:35.677 Timestamp: Supported 00:11:35.677 Copy: Supported 00:11:35.677 Volatile Write Cache: Present 00:11:35.677 Atomic Write Unit (Normal): 1 00:11:35.677 Atomic Write Unit (PFail): 1 00:11:35.677 Atomic Compare & Write Unit: 1 00:11:35.677 Fused Compare & Write: Not Supported 00:11:35.677 Scatter-Gather List 00:11:35.677 SGL Command Set: Supported 00:11:35.677 SGL Keyed: Not Supported 00:11:35.677 SGL Bit Bucket Descriptor: Not Supported 00:11:35.677 SGL Metadata Pointer: Not Supported 00:11:35.677 Oversized SGL: Not Supported 00:11:35.677 SGL Metadata Address: Not Supported 00:11:35.677 SGL Offset: Not Supported 00:11:35.677 Transport SGL Data Block: Not Supported 00:11:35.677 Replay Protected Memory Block: Not Supported 00:11:35.677 00:11:35.677 Firmware Slot Information 00:11:35.677 ========================= 00:11:35.677 Active slot: 1 00:11:35.677 Slot 1 Firmware Revision: 1.0 00:11:35.677 00:11:35.677 00:11:35.677 Commands Supported and Effects 00:11:35.677 ============================== 00:11:35.677 Admin Commands 00:11:35.677 -------------- 00:11:35.677 Delete I/O Submission Queue (00h): Supported 00:11:35.677 Create I/O Submission Queue (01h): Supported 00:11:35.677 Get Log Page (02h): Supported 00:11:35.677 Delete I/O Completion Queue (04h): Supported 00:11:35.677 Create I/O Completion Queue (05h): Supported 00:11:35.677 Identify (06h): Supported 00:11:35.677 Abort (08h): Supported 00:11:35.677 Set Features (09h): Supported 00:11:35.677 Get Features (0Ah): Supported 00:11:35.677 Asynchronous Event Request (0Ch): Supported 00:11:35.677 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:35.677 Directive Send (19h): Supported 00:11:35.677 Directive Receive (1Ah): Supported 00:11:35.677 Virtualization Management (1Ch): Supported 00:11:35.677 Doorbell Buffer Config (7Ch): Supported 00:11:35.677 Format NVM (80h): Supported LBA-Change 00:11:35.677 I/O Commands 00:11:35.677 ------------ 00:11:35.677 Flush (00h): Supported LBA-Change 00:11:35.677 Write (01h): Supported LBA-Change 00:11:35.677 Read (02h): Supported 00:11:35.677 Compare (05h): Supported 00:11:35.677 Write Zeroes (08h): Supported LBA-Change 00:11:35.677 Dataset Management (09h): Supported LBA-Change 00:11:35.677 Unknown (0Ch): Supported 00:11:35.677 Unknown (12h): Supported 00:11:35.677 Copy (19h): Supported LBA-Change 00:11:35.677 Unknown (1Dh): Supported LBA-Change 00:11:35.677 00:11:35.677 Error Log 00:11:35.677 ========= 00:11:35.677 
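Each controller block in this dump reports its health counters as plain "Key: Value" text, so spot checks are easy to script. A hedged sketch, assuming the raw identify output (without the harness timestamps) has been captured to identify.txt, a hypothetical file name; with several controllers in one dump the last block wins unless the text is first split on the banner lines:

    # Pull selected health fields from a saved spdk_nvme_identify dump.
    awk -F': ' '
        /^Current Temperature:/ { temp = $2 }
        /^Data Units Read:/     { units_read = $2 }
        /^Data Units Written:/  { units_written = $2 }
        END { printf "temp=%s read=%s written=%s\n", temp, units_read, units_written }
    ' identify.txt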
00:11:35.677 Arbitration 00:11:35.677 =========== 00:11:35.677 Arbitration Burst: no limit 00:11:35.677 00:11:35.677 Power Management 00:11:35.677 ================ 00:11:35.677 Number of Power States: 1 00:11:35.677 Current Power State: Power State #0 00:11:35.677 Power State #0: 00:11:35.677 Max Power: 25.00 W 00:11:35.677 Non-Operational State: Operational 00:11:35.677 Entry Latency: 16 microseconds 00:11:35.677 Exit Latency: 4 microseconds 00:11:35.677 Relative Read Throughput: 0 00:11:35.677 Relative Read Latency: 0 00:11:35.677 Relative Write Throughput: 0 00:11:35.677 Relative Write Latency: 0 00:11:35.677 Idle Power: Not Reported 00:11:35.677 Active Power: Not Reported 00:11:35.677 Non-Operational Permissive Mode: Not Supported 00:11:35.677 00:11:35.677 Health Information 00:11:35.677 ================== 00:11:35.677 Critical Warnings: 00:11:35.677 Available Spare Space: OK 00:11:35.677 [2024-12-06 06:37:48.301908] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 63100 terminated unexpected 00:11:35.677 Temperature: OK 00:11:35.677 Device Reliability: OK 00:11:35.677 Read Only: No 00:11:35.677 Volatile Memory Backup: OK 00:11:35.677 Current Temperature: 323 Kelvin (50 Celsius) 00:11:35.677 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:35.677 Available Spare: 0% 00:11:35.677 Available Spare Threshold: 0% 00:11:35.677 Life Percentage Used: 0% 00:11:35.677 Data Units Read: 1017 00:11:35.677 Data Units Written: 887 00:11:35.677 Host Read Commands: 52933 00:11:35.677 Host Write Commands: 51820 00:11:35.677 Controller Busy Time: 0 minutes 00:11:35.677 Power Cycles: 0 00:11:35.677 Power On Hours: 0 hours 00:11:35.677 Unsafe Shutdowns: 0 00:11:35.677 Unrecoverable Media Errors: 0 00:11:35.677 Lifetime Error Log Entries: 0 00:11:35.677 Warning Temperature Time: 0 minutes 00:11:35.677 Critical Temperature Time: 0 minutes 00:11:35.677 00:11:35.677 Number of Queues 00:11:35.677 ================ 00:11:35.677 Number of I/O Submission Queues: 64 00:11:35.677 Number of I/O Completion Queues: 64 00:11:35.677 00:11:35.677 ZNS Specific Controller Data 00:11:35.677 ============================ 00:11:35.677 Zone Append Size Limit: 0 00:11:35.677 00:11:35.677 00:11:35.677 Active Namespaces 00:11:35.677 ================= 00:11:35.677 Namespace ID:1 00:11:35.677 Error Recovery Timeout: Unlimited 00:11:35.677 Command Set Identifier: NVM (00h) 00:11:35.677 Deallocate: Supported 00:11:35.677 Deallocated/Unwritten Error: Supported 00:11:35.677 Deallocated Read Value: All 0x00 00:11:35.677 Deallocate in Write Zeroes: Not Supported 00:11:35.677 Deallocated Guard Field: 0xFFFF 00:11:35.677 Flush: Supported 00:11:35.677 Reservation: Not Supported 00:11:35.677 Namespace Sharing Capabilities: Private 00:11:35.677 Size (in LBAs): 1310720 (5GiB) 00:11:35.677 Capacity (in LBAs): 1310720 (5GiB) 00:11:35.677 Utilization (in LBAs): 1310720 (5GiB) 00:11:35.677 Thin Provisioning: Not Supported 00:11:35.677 Per-NS Atomic Units: No 00:11:35.677 Maximum Single Source Range Length: 128 00:11:35.677 Maximum Copy Length: 128 00:11:35.677 Maximum Source Range Count: 128 00:11:35.677 NGUID/EUI64 Never Reused: No 00:11:35.677 Namespace Write Protected: No 00:11:35.677 Number of LBA Formats: 8 00:11:35.677 Current LBA Format: LBA Format #04 00:11:35.677 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:35.677 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:35.677 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:35.677 LBA Format #03: Data Size: 512 Metadata Size: 64
00:11:35.677 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:35.677 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:35.677 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:35.677 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:35.678 00:11:35.678 NVM Specific Namespace Data 00:11:35.678 =========================== 00:11:35.678 Logical Block Storage Tag Mask: 0 00:11:35.678 Protection Information Capabilities: 00:11:35.678 16b Guard Protection Information Storage Tag Support: No 00:11:35.678 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:35.678 Storage Tag Check Read Support: No 00:11:35.678 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.678 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.678 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.678 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.678 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.678 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.678 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.678 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.678 ===================================================== 00:11:35.678 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:35.678 ===================================================== 00:11:35.678 Controller Capabilities/Features 00:11:35.678 ================================ 00:11:35.678 Vendor ID: 1b36 00:11:35.678 Subsystem Vendor ID: 1af4 00:11:35.678 Serial Number: 12343 00:11:35.678 Model Number: QEMU NVMe Ctrl 00:11:35.678 Firmware Version: 8.0.0 00:11:35.678 Recommended Arb Burst: 6 00:11:35.678 IEEE OUI Identifier: 00 54 52 00:11:35.678 Multi-path I/O 00:11:35.678 May have multiple subsystem ports: No 00:11:35.678 May have multiple controllers: Yes 00:11:35.678 Associated with SR-IOV VF: No 00:11:35.678 Max Data Transfer Size: 524288 00:11:35.678 Max Number of Namespaces: 256 00:11:35.678 Max Number of I/O Queues: 64 00:11:35.678 NVMe Specification Version (VS): 1.4 00:11:35.678 NVMe Specification Version (Identify): 1.4 00:11:35.678 Maximum Queue Entries: 2048 00:11:35.678 Contiguous Queues Required: Yes 00:11:35.678 Arbitration Mechanisms Supported 00:11:35.678 Weighted Round Robin: Not Supported 00:11:35.678 Vendor Specific: Not Supported 00:11:35.678 Reset Timeout: 7500 ms 00:11:35.678 Doorbell Stride: 4 bytes 00:11:35.678 NVM Subsystem Reset: Not Supported 00:11:35.678 Command Sets Supported 00:11:35.678 NVM Command Set: Supported 00:11:35.678 Boot Partition: Not Supported 00:11:35.678 Memory Page Size Minimum: 4096 bytes 00:11:35.678 Memory Page Size Maximum: 65536 bytes 00:11:35.678 Persistent Memory Region: Not Supported 00:11:35.678 Optional Asynchronous Events Supported 00:11:35.678 Namespace Attribute Notices: Supported 00:11:35.678 Firmware Activation Notices: Not Supported 00:11:35.678 ANA Change Notices: Not Supported 00:11:35.678 PLE Aggregate Log Change Notices: Not Supported 00:11:35.678 LBA Status Info Alert Notices: Not Supported 00:11:35.678 EGE Aggregate Log Change Notices: Not Supported 00:11:35.678 Normal NVM Subsystem Shutdown event: Not Supported 00:11:35.678 Zone Descriptor Change Notices: Not 
Supported 00:11:35.678 Discovery Log Change Notices: Not Supported 00:11:35.678 Controller Attributes 00:11:35.678 128-bit Host Identifier: Not Supported 00:11:35.678 Non-Operational Permissive Mode: Not Supported 00:11:35.678 NVM Sets: Not Supported 00:11:35.678 Read Recovery Levels: Not Supported 00:11:35.678 Endurance Groups: Supported 00:11:35.678 Predictable Latency Mode: Not Supported 00:11:35.678 Traffic Based Keep ALive: Not Supported 00:11:35.678 Namespace Granularity: Not Supported 00:11:35.678 SQ Associations: Not Supported 00:11:35.678 UUID List: Not Supported 00:11:35.678 Multi-Domain Subsystem: Not Supported 00:11:35.678 Fixed Capacity Management: Not Supported 00:11:35.678 Variable Capacity Management: Not Supported 00:11:35.678 Delete Endurance Group: Not Supported 00:11:35.678 Delete NVM Set: Not Supported 00:11:35.678 Extended LBA Formats Supported: Supported 00:11:35.678 Flexible Data Placement Supported: Supported 00:11:35.678 00:11:35.678 Controller Memory Buffer Support 00:11:35.678 ================================ 00:11:35.678 Supported: No 00:11:35.678 00:11:35.678 Persistent Memory Region Support 00:11:35.678 ================================ 00:11:35.678 Supported: No 00:11:35.678 00:11:35.678 Admin Command Set Attributes 00:11:35.678 ============================ 00:11:35.678 Security Send/Receive: Not Supported 00:11:35.678 Format NVM: Supported 00:11:35.678 Firmware Activate/Download: Not Supported 00:11:35.678 Namespace Management: Supported 00:11:35.678 Device Self-Test: Not Supported 00:11:35.678 Directives: Supported 00:11:35.678 NVMe-MI: Not Supported 00:11:35.678 Virtualization Management: Not Supported 00:11:35.678 Doorbell Buffer Config: Supported 00:11:35.678 Get LBA Status Capability: Not Supported 00:11:35.678 Command & Feature Lockdown Capability: Not Supported 00:11:35.678 Abort Command Limit: 4 00:11:35.678 Async Event Request Limit: 4 00:11:35.678 Number of Firmware Slots: N/A 00:11:35.678 Firmware Slot 1 Read-Only: N/A 00:11:35.678 Firmware Activation Without Reset: N/A 00:11:35.678 Multiple Update Detection Support: N/A 00:11:35.678 Firmware Update Granularity: No Information Provided 00:11:35.678 Per-Namespace SMART Log: Yes 00:11:35.678 Asymmetric Namespace Access Log Page: Not Supported 00:11:35.678 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:11:35.678 Command Effects Log Page: Supported 00:11:35.678 Get Log Page Extended Data: Supported 00:11:35.678 Telemetry Log Pages: Not Supported 00:11:35.678 Persistent Event Log Pages: Not Supported 00:11:35.678 Supported Log Pages Log Page: May Support 00:11:35.678 Commands Supported & Effects Log Page: Not Supported 00:11:35.678 Feature Identifiers & Effects Log Page:May Support 00:11:35.678 NVMe-MI Commands & Effects Log Page: May Support 00:11:35.678 Data Area 4 for Telemetry Log: Not Supported 00:11:35.678 Error Log Page Entries Supported: 1 00:11:35.678 Keep Alive: Not Supported 00:11:35.678 00:11:35.678 NVM Command Set Attributes 00:11:35.678 ========================== 00:11:35.678 Submission Queue Entry Size 00:11:35.678 Max: 64 00:11:35.678 Min: 64 00:11:35.678 Completion Queue Entry Size 00:11:35.678 Max: 16 00:11:35.678 Min: 16 00:11:35.678 Number of Namespaces: 256 00:11:35.678 Compare Command: Supported 00:11:35.678 Write Uncorrectable Command: Not Supported 00:11:35.678 Dataset Management Command: Supported 00:11:35.678 Write Zeroes Command: Supported 00:11:35.678 Set Features Save Field: Supported 00:11:35.678 Reservations: Not Supported 00:11:35.678 Timestamp: Supported 
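Of the four controllers in this run, only serial 12343 reports Endurance Groups and Flexible Data Placement as Supported, which is why the FDP log pages appear only in its section below. A small sketch of gating FDP-specific checks on that capability, reusing the hypothetical identify.txt capture from the note above:

    # Run FDP checks only when a controller advertises the capability.
    if grep -q '^Flexible Data Placement Supported: Supported' identify.txt; then
        echo "FDP-capable controller present: expect config, usage and statistics log pages"
    else
        echo "no FDP support reported; skipping FDP checks" >&2
    fi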
00:11:35.678 Copy: Supported 00:11:35.678 Volatile Write Cache: Present 00:11:35.678 Atomic Write Unit (Normal): 1 00:11:35.678 Atomic Write Unit (PFail): 1 00:11:35.678 Atomic Compare & Write Unit: 1 00:11:35.678 Fused Compare & Write: Not Supported 00:11:35.678 Scatter-Gather List 00:11:35.678 SGL Command Set: Supported 00:11:35.678 SGL Keyed: Not Supported 00:11:35.678 SGL Bit Bucket Descriptor: Not Supported 00:11:35.678 SGL Metadata Pointer: Not Supported 00:11:35.678 Oversized SGL: Not Supported 00:11:35.678 SGL Metadata Address: Not Supported 00:11:35.678 SGL Offset: Not Supported 00:11:35.678 Transport SGL Data Block: Not Supported 00:11:35.678 Replay Protected Memory Block: Not Supported 00:11:35.678 00:11:35.678 Firmware Slot Information 00:11:35.678 ========================= 00:11:35.678 Active slot: 1 00:11:35.678 Slot 1 Firmware Revision: 1.0 00:11:35.678 00:11:35.678 00:11:35.678 Commands Supported and Effects 00:11:35.678 ============================== 00:11:35.678 Admin Commands 00:11:35.678 -------------- 00:11:35.678 Delete I/O Submission Queue (00h): Supported 00:11:35.678 Create I/O Submission Queue (01h): Supported 00:11:35.678 Get Log Page (02h): Supported 00:11:35.678 Delete I/O Completion Queue (04h): Supported 00:11:35.678 Create I/O Completion Queue (05h): Supported 00:11:35.678 Identify (06h): Supported 00:11:35.678 Abort (08h): Supported 00:11:35.678 Set Features (09h): Supported 00:11:35.678 Get Features (0Ah): Supported 00:11:35.678 Asynchronous Event Request (0Ch): Supported 00:11:35.678 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:35.678 Directive Send (19h): Supported 00:11:35.678 Directive Receive (1Ah): Supported 00:11:35.678 Virtualization Management (1Ch): Supported 00:11:35.678 Doorbell Buffer Config (7Ch): Supported 00:11:35.678 Format NVM (80h): Supported LBA-Change 00:11:35.678 I/O Commands 00:11:35.678 ------------ 00:11:35.678 Flush (00h): Supported LBA-Change 00:11:35.678 Write (01h): Supported LBA-Change 00:11:35.678 Read (02h): Supported 00:11:35.678 Compare (05h): Supported 00:11:35.678 Write Zeroes (08h): Supported LBA-Change 00:11:35.678 Dataset Management (09h): Supported LBA-Change 00:11:35.678 Unknown (0Ch): Supported 00:11:35.679 Unknown (12h): Supported 00:11:35.679 Copy (19h): Supported LBA-Change 00:11:35.679 Unknown (1Dh): Supported LBA-Change 00:11:35.679 00:11:35.679 Error Log 00:11:35.679 ========= 00:11:35.679 00:11:35.679 Arbitration 00:11:35.679 =========== 00:11:35.679 Arbitration Burst: no limit 00:11:35.679 00:11:35.679 Power Management 00:11:35.679 ================ 00:11:35.679 Number of Power States: 1 00:11:35.679 Current Power State: Power State #0 00:11:35.679 Power State #0: 00:11:35.679 Max Power: 25.00 W 00:11:35.679 Non-Operational State: Operational 00:11:35.679 Entry Latency: 16 microseconds 00:11:35.679 Exit Latency: 4 microseconds 00:11:35.679 Relative Read Throughput: 0 00:11:35.679 Relative Read Latency: 0 00:11:35.679 Relative Write Throughput: 0 00:11:35.679 Relative Write Latency: 0 00:11:35.679 Idle Power: Not Reported 00:11:35.679 Active Power: Not Reported 00:11:35.679 Non-Operational Permissive Mode: Not Supported 00:11:35.679 00:11:35.679 Health Information 00:11:35.679 ================== 00:11:35.679 Critical Warnings: 00:11:35.679 Available Spare Space: OK 00:11:35.679 Temperature: OK 00:11:35.679 Device Reliability: OK 00:11:35.679 Read Only: No 00:11:35.679 Volatile Memory Backup: OK 00:11:35.679 Current Temperature: 323 Kelvin (50 Celsius) 00:11:35.679 Temperature Threshold: 343 
Kelvin (70 Celsius) 00:11:35.679 Available Spare: 0% 00:11:35.679 Available Spare Threshold: 0% 00:11:35.679 Life Percentage Used: 0% 00:11:35.679 Data Units Read: 776 00:11:35.679 Data Units Written: 705 00:11:35.679 Host Read Commands: 35602 00:11:35.679 Host Write Commands: 35024 00:11:35.679 Controller Busy Time: 0 minutes 00:11:35.679 Power Cycles: 0 00:11:35.679 Power On Hours: 0 hours 00:11:35.679 Unsafe Shutdowns: 0 00:11:35.679 Unrecoverable Media Errors: 0 00:11:35.679 Lifetime Error Log Entries: 0 00:11:35.679 Warning Temperature Time: 0 minutes 00:11:35.679 Critical Temperature Time: 0 minutes 00:11:35.679 00:11:35.679 Number of Queues 00:11:35.679 ================ 00:11:35.679 Number of I/O Submission Queues: 64 00:11:35.679 Number of I/O Completion Queues: 64 00:11:35.679 00:11:35.679 ZNS Specific Controller Data 00:11:35.679 ============================ 00:11:35.679 Zone Append Size Limit: 0 00:11:35.679 00:11:35.679 00:11:35.679 Active Namespaces 00:11:35.679 ================= 00:11:35.679 Namespace ID:1 00:11:35.679 Error Recovery Timeout: Unlimited 00:11:35.679 Command Set Identifier: NVM (00h) 00:11:35.679 Deallocate: Supported 00:11:35.679 Deallocated/Unwritten Error: Supported 00:11:35.679 Deallocated Read Value: All 0x00 00:11:35.679 Deallocate in Write Zeroes: Not Supported 00:11:35.679 Deallocated Guard Field: 0xFFFF 00:11:35.679 Flush: Supported 00:11:35.679 Reservation: Not Supported 00:11:35.679 Namespace Sharing Capabilities: Multiple Controllers 00:11:35.679 Size (in LBAs): 262144 (1GiB) 00:11:35.679 Capacity (in LBAs): 262144 (1GiB) 00:11:35.679 Utilization (in LBAs): 262144 (1GiB) 00:11:35.679 Thin Provisioning: Not Supported 00:11:35.679 Per-NS Atomic Units: No 00:11:35.679 Maximum Single Source Range Length: 128 00:11:35.679 Maximum Copy Length: 128 00:11:35.679 Maximum Source Range Count: 128 00:11:35.679 NGUID/EUI64 Never Reused: No 00:11:35.679 Namespace Write Protected: No 00:11:35.679 Endurance group ID: 1 00:11:35.679 Number of LBA Formats: 8 00:11:35.679 Current LBA Format: LBA Format #04 00:11:35.679 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:35.679 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:35.679 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:35.679 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:35.679 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:35.679 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:35.679 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:35.679 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:35.679 00:11:35.679 Get Feature FDP: 00:11:35.679 ================ 00:11:35.679 Enabled: Yes 00:11:35.679 FDP configuration index: 0 00:11:35.679 00:11:35.679 FDP configurations log page 00:11:35.679 =========================== 00:11:35.679 Number of FDP configurations: 1 00:11:35.679 Version: 0 00:11:35.679 Size: 112 00:11:35.679 FDP Configuration Descriptor: 0 00:11:35.679 Descriptor Size: 96 00:11:35.679 Reclaim Group Identifier format: 2 00:11:35.679 FDP Volatile Write Cache: Not Present 00:11:35.679 FDP Configuration: Valid 00:11:35.679 Vendor Specific Size: 0 00:11:35.679 Number of Reclaim Groups: 2 00:11:35.679 Number of Recalim Unit Handles: 8 00:11:35.679 Max Placement Identifiers: 128 00:11:35.679 Number of Namespaces Suppprted: 256 00:11:35.679 Reclaim unit Nominal Size: 6000000 bytes 00:11:35.679 Estimated Reclaim Unit Time Limit: Not Reported 00:11:35.679 RUH Desc #000: RUH Type: Initially Isolated 00:11:35.679 RUH Desc #001: RUH Type: Initially Isolated 
00:11:35.679 RUH Desc #002: RUH Type: Initially Isolated 00:11:35.679 RUH Desc #003: RUH Type: Initially Isolated 00:11:35.679 RUH Desc #004: RUH Type: Initially Isolated 00:11:35.679 RUH Desc #005: RUH Type: Initially Isolated 00:11:35.679 RUH Desc #006: RUH Type: Initially Isolated 00:11:35.679 RUH Desc #007: RUH Type: Initially Isolated 00:11:35.679 00:11:35.679 FDP reclaim unit handle usage log page 00:11:35.679 ====================================== 00:11:35.679 Number of Reclaim Unit Handles: 8 00:11:35.679 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:35.679 RUH Usage Desc #001: RUH Attributes: Unused 00:11:35.679 RUH Usage Desc #002: RUH Attributes: Unused 00:11:35.679 RUH Usage Desc #003: RUH Attributes: Unused 00:11:35.679 RUH Usage Desc #004: RUH Attributes: Unused 00:11:35.679 RUH Usage Desc #005: RUH Attributes: Unused 00:11:35.679 RUH Usage Desc #006: RUH Attributes: Unused 00:11:35.679 RUH Usage Desc #007: RUH Attributes: Unused 00:11:35.679 00:11:35.679 FDP statistics log page 00:11:35.679 ======================= 00:11:35.679 Host bytes with metadata written: 425304064 00:11:35.679 [2024-12-06 06:37:48.303248] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 63100 terminated unexpected 00:11:35.679 Media bytes with metadata written: 425349120 00:11:35.679 Media bytes erased: 0 00:11:35.679 00:11:35.679 FDP events log page 00:11:35.679 =================== 00:11:35.679 Number of FDP events: 0 00:11:35.679 00:11:35.679 NVM Specific Namespace Data 00:11:35.679 =========================== 00:11:35.679 Logical Block Storage Tag Mask: 0 00:11:35.679 Protection Information Capabilities: 00:11:35.679 16b Guard Protection Information Storage Tag Support: No 00:11:35.679 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:35.679 Storage Tag Check Read Support: No 00:11:35.679 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.679 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.679 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.679 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.679 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.679 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.679 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.679 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.679 ===================================================== 00:11:35.679 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:35.679 ===================================================== 00:11:35.679 Controller Capabilities/Features 00:11:35.679 ================================ 00:11:35.679 Vendor ID: 1b36 00:11:35.679 Subsystem Vendor ID: 1af4 00:11:35.679 Serial Number: 12342 00:11:35.679 Model Number: QEMU NVMe Ctrl 00:11:35.679 Firmware Version: 8.0.0 00:11:35.679 Recommended Arb Burst: 6 00:11:35.679 IEEE OUI Identifier: 00 54 52 00:11:35.679 Multi-path I/O 00:11:35.679 May have multiple subsystem ports: No 00:11:35.679 May have multiple controllers: No 00:11:35.679 Associated with SR-IOV VF: No 00:11:35.679 Max Data Transfer Size: 524288 00:11:35.679 Max Number of Namespaces: 256 00:11:35.679 Max Number of I/O
Queues: 64 00:11:35.679 NVMe Specification Version (VS): 1.4 00:11:35.679 NVMe Specification Version (Identify): 1.4 00:11:35.679 Maximum Queue Entries: 2048 00:11:35.679 Contiguous Queues Required: Yes 00:11:35.679 Arbitration Mechanisms Supported 00:11:35.679 Weighted Round Robin: Not Supported 00:11:35.679 Vendor Specific: Not Supported 00:11:35.679 Reset Timeout: 7500 ms 00:11:35.680 Doorbell Stride: 4 bytes 00:11:35.680 NVM Subsystem Reset: Not Supported 00:11:35.680 Command Sets Supported 00:11:35.680 NVM Command Set: Supported 00:11:35.680 Boot Partition: Not Supported 00:11:35.680 Memory Page Size Minimum: 4096 bytes 00:11:35.680 Memory Page Size Maximum: 65536 bytes 00:11:35.680 Persistent Memory Region: Not Supported 00:11:35.680 Optional Asynchronous Events Supported 00:11:35.680 Namespace Attribute Notices: Supported 00:11:35.680 Firmware Activation Notices: Not Supported 00:11:35.680 ANA Change Notices: Not Supported 00:11:35.680 PLE Aggregate Log Change Notices: Not Supported 00:11:35.680 LBA Status Info Alert Notices: Not Supported 00:11:35.680 EGE Aggregate Log Change Notices: Not Supported 00:11:35.680 Normal NVM Subsystem Shutdown event: Not Supported 00:11:35.680 Zone Descriptor Change Notices: Not Supported 00:11:35.680 Discovery Log Change Notices: Not Supported 00:11:35.680 Controller Attributes 00:11:35.680 128-bit Host Identifier: Not Supported 00:11:35.680 Non-Operational Permissive Mode: Not Supported 00:11:35.680 NVM Sets: Not Supported 00:11:35.680 Read Recovery Levels: Not Supported 00:11:35.680 Endurance Groups: Not Supported 00:11:35.680 Predictable Latency Mode: Not Supported 00:11:35.680 Traffic Based Keep Alive: Not Supported 00:11:35.680 Namespace Granularity: Not Supported 00:11:35.680 SQ Associations: Not Supported 00:11:35.680 UUID List: Not Supported 00:11:35.680 Multi-Domain Subsystem: Not Supported 00:11:35.680 Fixed Capacity Management: Not Supported 00:11:35.680 Variable Capacity Management: Not Supported 00:11:35.680 Delete Endurance Group: Not Supported 00:11:35.680 Delete NVM Set: Not Supported 00:11:35.680 Extended LBA Formats Supported: Supported 00:11:35.680 Flexible Data Placement Supported: Not Supported 00:11:35.680 00:11:35.680 Controller Memory Buffer Support 00:11:35.680 ================================ 00:11:35.680 Supported: No 00:11:35.680 00:11:35.680 Persistent Memory Region Support 00:11:35.680 ================================ 00:11:35.680 Supported: No 00:11:35.680 00:11:35.680 Admin Command Set Attributes 00:11:35.680 ============================ 00:11:35.680 Security Send/Receive: Not Supported 00:11:35.680 Format NVM: Supported 00:11:35.680 Firmware Activate/Download: Not Supported 00:11:35.680 Namespace Management: Supported 00:11:35.680 Device Self-Test: Not Supported 00:11:35.680 Directives: Supported 00:11:35.680 NVMe-MI: Not Supported 00:11:35.680 Virtualization Management: Not Supported 00:11:35.680 Doorbell Buffer Config: Supported 00:11:35.680 Get LBA Status Capability: Not Supported 00:11:35.680 Command & Feature Lockdown Capability: Not Supported 00:11:35.680 Abort Command Limit: 4 00:11:35.680 Async Event Request Limit: 4 00:11:35.680 Number of Firmware Slots: N/A 00:11:35.680 Firmware Slot 1 Read-Only: N/A 00:11:35.680 Firmware Activation Without Reset: N/A 00:11:35.680 Multiple Update Detection Support: N/A 00:11:35.680 Firmware Update Granularity: No Information Provided 00:11:35.680 Per-Namespace SMART Log: Yes 00:11:35.680 Asymmetric Namespace Access Log Page: Not Supported 00:11:35.680 Subsystem NQN:
nqn.2019-08.org.qemu:12342 00:11:35.680 Command Effects Log Page: Supported 00:11:35.680 Get Log Page Extended Data: Supported 00:11:35.680 Telemetry Log Pages: Not Supported 00:11:35.680 Persistent Event Log Pages: Not Supported 00:11:35.680 Supported Log Pages Log Page: May Support 00:11:35.680 Commands Supported & Effects Log Page: Not Supported 00:11:35.680 Feature Identifiers & Effects Log Page: May Support 00:11:35.680 NVMe-MI Commands & Effects Log Page: May Support 00:11:35.680 Data Area 4 for Telemetry Log: Not Supported 00:11:35.680 Error Log Page Entries Supported: 1 00:11:35.680 Keep Alive: Not Supported 00:11:35.680 00:11:35.680 NVM Command Set Attributes 00:11:35.680 ========================== 00:11:35.680 Submission Queue Entry Size 00:11:35.680 Max: 64 00:11:35.680 Min: 64 00:11:35.680 Completion Queue Entry Size 00:11:35.680 Max: 16 00:11:35.680 Min: 16 00:11:35.680 Number of Namespaces: 256 00:11:35.680 Compare Command: Supported 00:11:35.680 Write Uncorrectable Command: Not Supported 00:11:35.680 Dataset Management Command: Supported 00:11:35.680 Write Zeroes Command: Supported 00:11:35.680 Set Features Save Field: Supported 00:11:35.680 Reservations: Not Supported 00:11:35.680 Timestamp: Supported 00:11:35.680 Copy: Supported 00:11:35.680 Volatile Write Cache: Present 00:11:35.680 Atomic Write Unit (Normal): 1 00:11:35.680 Atomic Write Unit (PFail): 1 00:11:35.680 Atomic Compare & Write Unit: 1 00:11:35.680 Fused Compare & Write: Not Supported 00:11:35.680 Scatter-Gather List 00:11:35.680 SGL Command Set: Supported 00:11:35.680 SGL Keyed: Not Supported 00:11:35.680 SGL Bit Bucket Descriptor: Not Supported 00:11:35.680 SGL Metadata Pointer: Not Supported 00:11:35.680 Oversized SGL: Not Supported 00:11:35.680 SGL Metadata Address: Not Supported 00:11:35.680 SGL Offset: Not Supported 00:11:35.680 Transport SGL Data Block: Not Supported 00:11:35.680 Replay Protected Memory Block: Not Supported 00:11:35.680 00:11:35.680 Firmware Slot Information 00:11:35.680 ========================= 00:11:35.680 Active slot: 1 00:11:35.680 Slot 1 Firmware Revision: 1.0 00:11:35.681 00:11:35.681 00:11:35.681 Commands Supported and Effects 00:11:35.681 ============================== 00:11:35.681 Admin Commands 00:11:35.681 -------------- 00:11:35.681 Delete I/O Submission Queue (00h): Supported 00:11:35.681 Create I/O Submission Queue (01h): Supported 00:11:35.681 Get Log Page (02h): Supported 00:11:35.681 Delete I/O Completion Queue (04h): Supported 00:11:35.681 Create I/O Completion Queue (05h): Supported 00:11:35.681 Identify (06h): Supported 00:11:35.681 Abort (08h): Supported 00:11:35.681 Set Features (09h): Supported 00:11:35.681 Get Features (0Ah): Supported 00:11:35.681 Asynchronous Event Request (0Ch): Supported 00:11:35.681 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:35.681 Directive Send (19h): Supported 00:11:35.681 Directive Receive (1Ah): Supported 00:11:35.681 Virtualization Management (1Ch): Supported 00:11:35.681 Doorbell Buffer Config (7Ch): Supported 00:11:35.681 Format NVM (80h): Supported LBA-Change 00:11:35.681 I/O Commands 00:11:35.681 ------------ 00:11:35.681 Flush (00h): Supported LBA-Change 00:11:35.681 Write (01h): Supported LBA-Change 00:11:35.681 Read (02h): Supported 00:11:35.681 Compare (05h): Supported 00:11:35.681 Write Zeroes (08h): Supported LBA-Change 00:11:35.681 Dataset Management (09h): Supported LBA-Change 00:11:35.681 Unknown (0Ch): Supported 00:11:35.681 Unknown (12h): Supported 00:11:35.681 Copy (19h): Supported LBA-Change
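The per-opcode "Commands Supported and Effects" table above is decoded from the Command Effects log page (LID 05h), which this controller reports as supported. As a hedged aside only: the raw page could be fetched with nvme-cli if the same device were bound to the kernel nvme driver rather than SPDK's userspace driver (it is not during this run, so the /dev/nvme0 node used here is an assumption):
  # Hypothetical cross-check with nvme-cli; /dev/nvme0 is an assumed device node.
  # Dumps the first lines of the 4 KiB Command Effects log page (LID 05h).
  nvme get-log /dev/nvme0 --log-id=0x05 --log-len=4096 | head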
00:11:35.681 Unknown (1Dh): Supported LBA-Change 00:11:35.681 00:11:35.681 Error Log 00:11:35.681 ========= 00:11:35.681 00:11:35.681 Arbitration 00:11:35.681 =========== 00:11:35.681 Arbitration Burst: no limit 00:11:35.681 00:11:35.681 Power Management 00:11:35.681 ================ 00:11:35.681 Number of Power States: 1 00:11:35.681 Current Power State: Power State #0 00:11:35.681 Power State #0: 00:11:35.681 Max Power: 25.00 W 00:11:35.681 Non-Operational State: Operational 00:11:35.681 Entry Latency: 16 microseconds 00:11:35.681 Exit Latency: 4 microseconds 00:11:35.681 Relative Read Throughput: 0 00:11:35.681 Relative Read Latency: 0 00:11:35.681 Relative Write Throughput: 0 00:11:35.681 Relative Write Latency: 0 00:11:35.681 Idle Power: Not Reported 00:11:35.681 Active Power: Not Reported 00:11:35.681 Non-Operational Permissive Mode: Not Supported 00:11:35.681 00:11:35.681 Health Information 00:11:35.681 ================== 00:11:35.681 Critical Warnings: 00:11:35.681 Available Spare Space: OK 00:11:35.681 Temperature: OK 00:11:35.681 Device Reliability: OK 00:11:35.681 Read Only: No 00:11:35.681 Volatile Memory Backup: OK 00:11:35.681 Current Temperature: 323 Kelvin (50 Celsius) 00:11:35.681 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:35.681 Available Spare: 0% 00:11:35.681 Available Spare Threshold: 0% 00:11:35.681 Life Percentage Used: 0% 00:11:35.681 Data Units Read: 2070 00:11:35.681 Data Units Written: 1857 00:11:35.681 Host Read Commands: 104311 00:11:35.681 Host Write Commands: 102580 00:11:35.681 Controller Busy Time: 0 minutes 00:11:35.681 Power Cycles: 0 00:11:35.681 Power On Hours: 0 hours 00:11:35.681 Unsafe Shutdowns: 0 00:11:35.681 Unrecoverable Media Errors: 0 00:11:35.681 Lifetime Error Log Entries: 0 00:11:35.681 Warning Temperature Time: 0 minutes 00:11:35.681 Critical Temperature Time: 0 minutes 00:11:35.681 00:11:35.681 Number of Queues 00:11:35.681 ================ 00:11:35.681 Number of I/O Submission Queues: 64 00:11:35.681 Number of I/O Completion Queues: 64 00:11:35.681 00:11:35.681 ZNS Specific Controller Data 00:11:35.681 ============================ 00:11:35.681 Zone Append Size Limit: 0 00:11:35.681 00:11:35.681 00:11:35.681 Active Namespaces 00:11:35.681 ================= 00:11:35.681 Namespace ID:1 00:11:35.681 Error Recovery Timeout: Unlimited 00:11:35.681 Command Set Identifier: NVM (00h) 00:11:35.681 Deallocate: Supported 00:11:35.681 Deallocated/Unwritten Error: Supported 00:11:35.681 Deallocated Read Value: All 0x00 00:11:35.681 Deallocate in Write Zeroes: Not Supported 00:11:35.681 Deallocated Guard Field: 0xFFFF 00:11:35.681 Flush: Supported 00:11:35.681 Reservation: Not Supported 00:11:35.681 Namespace Sharing Capabilities: Private 00:11:35.681 Size (in LBAs): 1048576 (4GiB) 00:11:35.681 Capacity (in LBAs): 1048576 (4GiB) 00:11:35.681 Utilization (in LBAs): 1048576 (4GiB) 00:11:35.681 Thin Provisioning: Not Supported 00:11:35.681 Per-NS Atomic Units: No 00:11:35.681 Maximum Single Source Range Length: 128 00:11:35.681 Maximum Copy Length: 128 00:11:35.681 Maximum Source Range Count: 128 00:11:35.681 NGUID/EUI64 Never Reused: No 00:11:35.681 Namespace Write Protected: No 00:11:35.681 Number of LBA Formats: 8 00:11:35.681 Current LBA Format: LBA Format #04 00:11:35.681 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:35.681 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:35.681 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:35.681 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:35.681 LBA Format #04: Data Size: 
4096 Metadata Size: 0 00:11:35.681 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:35.681 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:35.681 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:35.681 00:11:35.681 NVM Specific Namespace Data 00:11:35.681 =========================== 00:11:35.681 Logical Block Storage Tag Mask: 0 00:11:35.681 Protection Information Capabilities: 00:11:35.681 16b Guard Protection Information Storage Tag Support: No 00:11:35.681 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:35.681 Storage Tag Check Read Support: No 00:11:35.681 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.681 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.681 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.681 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.681 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.681 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.681 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.681 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.681 Namespace ID:2 00:11:35.681 Error Recovery Timeout: Unlimited 00:11:35.681 Command Set Identifier: NVM (00h) 00:11:35.681 Deallocate: Supported 00:11:35.681 Deallocated/Unwritten Error: Supported 00:11:35.681 Deallocated Read Value: All 0x00 00:11:35.681 Deallocate in Write Zeroes: Not Supported 00:11:35.681 Deallocated Guard Field: 0xFFFF 00:11:35.681 Flush: Supported 00:11:35.681 Reservation: Not Supported 00:11:35.681 Namespace Sharing Capabilities: Private 00:11:35.681 Size (in LBAs): 1048576 (4GiB) 00:11:35.681 Capacity (in LBAs): 1048576 (4GiB) 00:11:35.681 Utilization (in LBAs): 1048576 (4GiB) 00:11:35.681 Thin Provisioning: Not Supported 00:11:35.681 Per-NS Atomic Units: No 00:11:35.681 Maximum Single Source Range Length: 128 00:11:35.681 Maximum Copy Length: 128 00:11:35.681 Maximum Source Range Count: 128 00:11:35.681 NGUID/EUI64 Never Reused: No 00:11:35.681 Namespace Write Protected: No 00:11:35.681 Number of LBA Formats: 8 00:11:35.681 Current LBA Format: LBA Format #04 00:11:35.681 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:35.681 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:35.681 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:35.681 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:35.681 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:35.681 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:35.681 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:35.681 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:35.681 00:11:35.681 NVM Specific Namespace Data 00:11:35.681 =========================== 00:11:35.681 Logical Block Storage Tag Mask: 0 00:11:35.681 Protection Information Capabilities: 00:11:35.681 16b Guard Protection Information Storage Tag Support: No 00:11:35.681 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:35.681 Storage Tag Check Read Support: No 00:11:35.682 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.682 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard 
PI 00:11:35.682 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.682 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.682 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.682 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.682 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.682 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.682 Namespace ID:3 00:11:35.682 Error Recovery Timeout: Unlimited 00:11:35.682 Command Set Identifier: NVM (00h) 00:11:35.682 Deallocate: Supported 00:11:35.682 Deallocated/Unwritten Error: Supported 00:11:35.682 Deallocated Read Value: All 0x00 00:11:35.682 Deallocate in Write Zeroes: Not Supported 00:11:35.682 Deallocated Guard Field: 0xFFFF 00:11:35.682 Flush: Supported 00:11:35.682 Reservation: Not Supported 00:11:35.682 Namespace Sharing Capabilities: Private 00:11:35.682 Size (in LBAs): 1048576 (4GiB) 00:11:35.682 Capacity (in LBAs): 1048576 (4GiB) 00:11:35.682 Utilization (in LBAs): 1048576 (4GiB) 00:11:35.682 Thin Provisioning: Not Supported 00:11:35.682 Per-NS Atomic Units: No 00:11:35.682 Maximum Single Source Range Length: 128 00:11:35.682 Maximum Copy Length: 128 00:11:35.682 Maximum Source Range Count: 128 00:11:35.682 NGUID/EUI64 Never Reused: No 00:11:35.682 Namespace Write Protected: No 00:11:35.682 Number of LBA Formats: 8 00:11:35.682 Current LBA Format: LBA Format #04 00:11:35.682 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:35.682 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:35.682 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:35.682 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:35.682 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:35.682 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:35.682 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:35.682 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:35.682 00:11:35.682 NVM Specific Namespace Data 00:11:35.682 =========================== 00:11:35.682 Logical Block Storage Tag Mask: 0 00:11:35.682 Protection Information Capabilities: 00:11:35.682 16b Guard Protection Information Storage Tag Support: No 00:11:35.682 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:35.682 Storage Tag Check Read Support: No 00:11:35.682 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.682 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.682 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.682 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.682 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.682 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.682 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.682 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.682 06:37:48 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:35.682 06:37:48 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:11:35.941 ===================================================== 00:11:35.941 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:35.941 ===================================================== 00:11:35.941 Controller Capabilities/Features 00:11:35.941 ================================ 00:11:35.941 Vendor ID: 1b36 00:11:35.941 Subsystem Vendor ID: 1af4 00:11:35.941 Serial Number: 12340 00:11:35.941 Model Number: QEMU NVMe Ctrl 00:11:35.941 Firmware Version: 8.0.0 00:11:35.941 Recommended Arb Burst: 6 00:11:35.941 IEEE OUI Identifier: 00 54 52 00:11:35.941 Multi-path I/O 00:11:35.941 May have multiple subsystem ports: No 00:11:35.941 May have multiple controllers: No 00:11:35.941 Associated with SR-IOV VF: No 00:11:35.941 Max Data Transfer Size: 524288 00:11:35.941 Max Number of Namespaces: 256 00:11:35.941 Max Number of I/O Queues: 64 00:11:35.941 NVMe Specification Version (VS): 1.4 00:11:35.941 NVMe Specification Version (Identify): 1.4 00:11:35.941 Maximum Queue Entries: 2048 00:11:35.941 Contiguous Queues Required: Yes 00:11:35.941 Arbitration Mechanisms Supported 00:11:35.941 Weighted Round Robin: Not Supported 00:11:35.941 Vendor Specific: Not Supported 00:11:35.941 Reset Timeout: 7500 ms 00:11:35.941 Doorbell Stride: 4 bytes 00:11:35.941 NVM Subsystem Reset: Not Supported 00:11:35.941 Command Sets Supported 00:11:35.941 NVM Command Set: Supported 00:11:35.941 Boot Partition: Not Supported 00:11:35.941 Memory Page Size Minimum: 4096 bytes 00:11:35.941 Memory Page Size Maximum: 65536 bytes 00:11:35.941 Persistent Memory Region: Not Supported 00:11:35.941 Optional Asynchronous Events Supported 00:11:35.941 Namespace Attribute Notices: Supported 00:11:35.941 Firmware Activation Notices: Not Supported 00:11:35.941 ANA Change Notices: Not Supported 00:11:35.941 PLE Aggregate Log Change Notices: Not Supported 00:11:35.941 LBA Status Info Alert Notices: Not Supported 00:11:35.941 EGE Aggregate Log Change Notices: Not Supported 00:11:35.941 Normal NVM Subsystem Shutdown event: Not Supported 00:11:35.941 Zone Descriptor Change Notices: Not Supported 00:11:35.941 Discovery Log Change Notices: Not Supported 00:11:35.941 Controller Attributes 00:11:35.941 128-bit Host Identifier: Not Supported 00:11:35.941 Non-Operational Permissive Mode: Not Supported 00:11:35.941 NVM Sets: Not Supported 00:11:35.941 Read Recovery Levels: Not Supported 00:11:35.941 Endurance Groups: Not Supported 00:11:35.941 Predictable Latency Mode: Not Supported 00:11:35.941 Traffic Based Keep Alive: Not Supported 00:11:35.941 Namespace Granularity: Not Supported 00:11:35.941 SQ Associations: Not Supported 00:11:35.941 UUID List: Not Supported 00:11:35.941 Multi-Domain Subsystem: Not Supported 00:11:35.941 Fixed Capacity Management: Not Supported 00:11:35.941 Variable Capacity Management: Not Supported 00:11:35.941 Delete Endurance Group: Not Supported 00:11:35.941 Delete NVM Set: Not Supported 00:11:35.941 Extended LBA Formats Supported: Supported 00:11:35.941 Flexible Data Placement Supported: Not Supported 00:11:35.941 00:11:35.941 Controller Memory Buffer Support 00:11:35.941 ================================ 00:11:35.941 Supported: No 00:11:35.941 00:11:35.941 Persistent Memory Region Support 00:11:35.941 ================================ 00:11:35.941 Supported: No 00:11:35.941 00:11:35.941 Admin Command Set Attributes 00:11:35.941 ============================ 00:11:35.941 Security Send/Receive: Not Supported 00:11:35.941
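The xtrace lines interleaved here (for bdf in "${bdfs[@]}" followed by the spdk_nvme_identify invocation) are the harness walking every controller under test. A minimal standalone equivalent of that loop, with the bdf list hard-coded to the four QEMU controllers visible in this log (nvme.sh discovers them dynamically):
  #!/usr/bin/env bash
  # Dump Identify data for each PCIe-attached controller, as the traced
  # nvme.sh loop does; the bdf list is hard-coded here for illustration.
  bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
  for bdf in "${bdfs[@]}"; do
      /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
          -r "trtype:PCIe traddr:${bdf}" -i 0
  done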
Format NVM: Supported 00:11:35.941 Firmware Activate/Download: Not Supported 00:11:35.941 Namespace Management: Supported 00:11:35.941 Device Self-Test: Not Supported 00:11:35.941 Directives: Supported 00:11:35.941 NVMe-MI: Not Supported 00:11:35.941 Virtualization Management: Not Supported 00:11:35.942 Doorbell Buffer Config: Supported 00:11:35.942 Get LBA Status Capability: Not Supported 00:11:35.942 Command & Feature Lockdown Capability: Not Supported 00:11:35.942 Abort Command Limit: 4 00:11:35.942 Async Event Request Limit: 4 00:11:35.942 Number of Firmware Slots: N/A 00:11:35.942 Firmware Slot 1 Read-Only: N/A 00:11:35.942 Firmware Activation Without Reset: N/A 00:11:35.942 Multiple Update Detection Support: N/A 00:11:35.942 Firmware Update Granularity: No Information Provided 00:11:35.942 Per-Namespace SMART Log: Yes 00:11:35.942 Asymmetric Namespace Access Log Page: Not Supported 00:11:35.942 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:11:35.942 Command Effects Log Page: Supported 00:11:35.942 Get Log Page Extended Data: Supported 00:11:35.942 Telemetry Log Pages: Not Supported 00:11:35.942 Persistent Event Log Pages: Not Supported 00:11:35.942 Supported Log Pages Log Page: May Support 00:11:35.942 Commands Supported & Effects Log Page: Not Supported 00:11:35.942 Feature Identifiers & Effects Log Page: May Support 00:11:35.942 NVMe-MI Commands & Effects Log Page: May Support 00:11:35.942 Data Area 4 for Telemetry Log: Not Supported 00:11:35.942 Error Log Page Entries Supported: 1 00:11:35.942 Keep Alive: Not Supported 00:11:35.942 00:11:35.942 NVM Command Set Attributes 00:11:35.942 ========================== 00:11:35.942 Submission Queue Entry Size 00:11:35.942 Max: 64 00:11:35.942 Min: 64 00:11:35.942 Completion Queue Entry Size 00:11:35.942 Max: 16 00:11:35.942 Min: 16 00:11:35.942 Number of Namespaces: 256 00:11:35.942 Compare Command: Supported 00:11:35.942 Write Uncorrectable Command: Not Supported 00:11:35.942 Dataset Management Command: Supported 00:11:35.942 Write Zeroes Command: Supported 00:11:35.942 Set Features Save Field: Supported 00:11:35.942 Reservations: Not Supported 00:11:35.942 Timestamp: Supported 00:11:35.942 Copy: Supported 00:11:35.942 Volatile Write Cache: Present 00:11:35.942 Atomic Write Unit (Normal): 1 00:11:35.942 Atomic Write Unit (PFail): 1 00:11:35.942 Atomic Compare & Write Unit: 1 00:11:35.942 Fused Compare & Write: Not Supported 00:11:35.942 Scatter-Gather List 00:11:35.942 SGL Command Set: Supported 00:11:35.942 SGL Keyed: Not Supported 00:11:35.942 SGL Bit Bucket Descriptor: Not Supported 00:11:35.942 SGL Metadata Pointer: Not Supported 00:11:35.942 Oversized SGL: Not Supported 00:11:35.942 SGL Metadata Address: Not Supported 00:11:35.942 SGL Offset: Not Supported 00:11:35.942 Transport SGL Data Block: Not Supported 00:11:35.942 Replay Protected Memory Block: Not Supported 00:11:35.942 00:11:35.942 Firmware Slot Information 00:11:35.942 ========================= 00:11:35.942 Active slot: 1 00:11:35.942 Slot 1 Firmware Revision: 1.0 00:11:35.942 00:11:35.942 00:11:35.942 Commands Supported and Effects 00:11:35.942 ============================== 00:11:35.942 Admin Commands 00:11:35.942 -------------- 00:11:35.942 Delete I/O Submission Queue (00h): Supported 00:11:35.942 Create I/O Submission Queue (01h): Supported 00:11:35.942 Get Log Page (02h): Supported 00:11:35.942 Delete I/O Completion Queue (04h): Supported 00:11:35.942 Create I/O Completion Queue (05h): Supported 00:11:35.942 Identify (06h): Supported 00:11:35.942 Abort (08h): Supported
00:11:35.942 Set Features (09h): Supported 00:11:35.942 Get Features (0Ah): Supported 00:11:35.942 Asynchronous Event Request (0Ch): Supported 00:11:35.942 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:35.942 Directive Send (19h): Supported 00:11:35.942 Directive Receive (1Ah): Supported 00:11:35.942 Virtualization Management (1Ch): Supported 00:11:35.942 Doorbell Buffer Config (7Ch): Supported 00:11:35.942 Format NVM (80h): Supported LBA-Change 00:11:35.942 I/O Commands 00:11:35.942 ------------ 00:11:35.942 Flush (00h): Supported LBA-Change 00:11:35.942 Write (01h): Supported LBA-Change 00:11:35.942 Read (02h): Supported 00:11:35.942 Compare (05h): Supported 00:11:35.942 Write Zeroes (08h): Supported LBA-Change 00:11:35.942 Dataset Management (09h): Supported LBA-Change 00:11:35.942 Unknown (0Ch): Supported 00:11:35.942 Unknown (12h): Supported 00:11:35.942 Copy (19h): Supported LBA-Change 00:11:35.942 Unknown (1Dh): Supported LBA-Change 00:11:35.942 00:11:35.942 Error Log 00:11:35.942 ========= 00:11:35.942 00:11:35.942 Arbitration 00:11:35.942 =========== 00:11:35.942 Arbitration Burst: no limit 00:11:35.942 00:11:35.942 Power Management 00:11:35.942 ================ 00:11:35.942 Number of Power States: 1 00:11:35.942 Current Power State: Power State #0 00:11:35.942 Power State #0: 00:11:35.942 Max Power: 25.00 W 00:11:35.942 Non-Operational State: Operational 00:11:35.942 Entry Latency: 16 microseconds 00:11:35.942 Exit Latency: 4 microseconds 00:11:35.942 Relative Read Throughput: 0 00:11:35.942 Relative Read Latency: 0 00:11:35.942 Relative Write Throughput: 0 00:11:35.942 Relative Write Latency: 0 00:11:35.942 Idle Power: Not Reported 00:11:35.942 Active Power: Not Reported 00:11:35.942 Non-Operational Permissive Mode: Not Supported 00:11:35.942 00:11:35.942 Health Information 00:11:35.942 ================== 00:11:35.942 Critical Warnings: 00:11:35.942 Available Spare Space: OK 00:11:35.942 Temperature: OK 00:11:35.942 Device Reliability: OK 00:11:35.942 Read Only: No 00:11:35.942 Volatile Memory Backup: OK 00:11:35.942 Current Temperature: 323 Kelvin (50 Celsius) 00:11:35.942 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:35.942 Available Spare: 0% 00:11:35.942 Available Spare Threshold: 0% 00:11:35.942 Life Percentage Used: 0% 00:11:35.942 Data Units Read: 638 00:11:35.942 Data Units Written: 566 00:11:35.942 Host Read Commands: 34165 00:11:35.942 Host Write Commands: 33951 00:11:35.942 Controller Busy Time: 0 minutes 00:11:35.942 Power Cycles: 0 00:11:35.942 Power On Hours: 0 hours 00:11:35.942 Unsafe Shutdowns: 0 00:11:35.942 Unrecoverable Media Errors: 0 00:11:35.942 Lifetime Error Log Entries: 0 00:11:35.942 Warning Temperature Time: 0 minutes 00:11:35.942 Critical Temperature Time: 0 minutes 00:11:35.942 00:11:35.942 Number of Queues 00:11:35.942 ================ 00:11:35.942 Number of I/O Submission Queues: 64 00:11:35.942 Number of I/O Completion Queues: 64 00:11:35.942 00:11:35.942 ZNS Specific Controller Data 00:11:35.942 ============================ 00:11:35.942 Zone Append Size Limit: 0 00:11:35.942 00:11:35.942 00:11:35.942 Active Namespaces 00:11:35.942 ================= 00:11:35.942 Namespace ID:1 00:11:35.942 Error Recovery Timeout: Unlimited 00:11:35.942 Command Set Identifier: NVM (00h) 00:11:35.942 Deallocate: Supported 00:11:35.942 Deallocated/Unwritten Error: Supported 00:11:35.942 Deallocated Read Value: All 0x00 00:11:35.942 Deallocate in Write Zeroes: Not Supported 00:11:35.942 Deallocated Guard Field: 0xFFFF 00:11:35.942 Flush: 
Supported 00:11:35.942 Reservation: Not Supported 00:11:35.942 Metadata Transferred as: Separate Metadata Buffer 00:11:35.942 Namespace Sharing Capabilities: Private 00:11:35.942 Size (in LBAs): 1548666 (5GiB) 00:11:35.942 Capacity (in LBAs): 1548666 (5GiB) 00:11:35.942 Utilization (in LBAs): 1548666 (5GiB) 00:11:35.942 Thin Provisioning: Not Supported 00:11:35.942 Per-NS Atomic Units: No 00:11:35.942 Maximum Single Source Range Length: 128 00:11:35.942 Maximum Copy Length: 128 00:11:35.942 Maximum Source Range Count: 128 00:11:35.942 NGUID/EUI64 Never Reused: No 00:11:35.942 Namespace Write Protected: No 00:11:35.942 Number of LBA Formats: 8 00:11:35.942 Current LBA Format: LBA Format #07 00:11:35.942 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:35.942 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:35.942 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:35.942 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:35.942 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:35.942 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:35.942 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:35.942 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:35.942 00:11:35.942 NVM Specific Namespace Data 00:11:35.942 =========================== 00:11:35.942 Logical Block Storage Tag Mask: 0 00:11:35.942 Protection Information Capabilities: 00:11:35.942 16b Guard Protection Information Storage Tag Support: No 00:11:35.942 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:35.942 Storage Tag Check Read Support: No 00:11:35.942 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.942 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.942 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.943 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.943 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.943 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.943 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.943 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:35.943 06:37:48 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:35.943 06:37:48 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:11:36.205 ===================================================== 00:11:36.205 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:36.205 ===================================================== 00:11:36.205 Controller Capabilities/Features 00:11:36.205 ================================ 00:11:36.205 Vendor ID: 1b36 00:11:36.205 Subsystem Vendor ID: 1af4 00:11:36.205 Serial Number: 12341 00:11:36.205 Model Number: QEMU NVMe Ctrl 00:11:36.205 Firmware Version: 8.0.0 00:11:36.205 Recommended Arb Burst: 6 00:11:36.205 IEEE OUI Identifier: 00 54 52 00:11:36.205 Multi-path I/O 00:11:36.205 May have multiple subsystem ports: No 00:11:36.205 May have multiple controllers: No 00:11:36.205 Associated with SR-IOV VF: No 00:11:36.205 Max Data Transfer Size: 524288 00:11:36.205 Max Number of Namespaces: 256 00:11:36.205 Max Number of I/O Queues: 64 00:11:36.205 NVMe 
Specification Version (VS): 1.4 00:11:36.205 NVMe Specification Version (Identify): 1.4 00:11:36.205 Maximum Queue Entries: 2048 00:11:36.205 Contiguous Queues Required: Yes 00:11:36.205 Arbitration Mechanisms Supported 00:11:36.205 Weighted Round Robin: Not Supported 00:11:36.205 Vendor Specific: Not Supported 00:11:36.205 Reset Timeout: 7500 ms 00:11:36.205 Doorbell Stride: 4 bytes 00:11:36.205 NVM Subsystem Reset: Not Supported 00:11:36.205 Command Sets Supported 00:11:36.205 NVM Command Set: Supported 00:11:36.205 Boot Partition: Not Supported 00:11:36.205 Memory Page Size Minimum: 4096 bytes 00:11:36.205 Memory Page Size Maximum: 65536 bytes 00:11:36.205 Persistent Memory Region: Not Supported 00:11:36.205 Optional Asynchronous Events Supported 00:11:36.205 Namespace Attribute Notices: Supported 00:11:36.205 Firmware Activation Notices: Not Supported 00:11:36.205 ANA Change Notices: Not Supported 00:11:36.205 PLE Aggregate Log Change Notices: Not Supported 00:11:36.205 LBA Status Info Alert Notices: Not Supported 00:11:36.205 EGE Aggregate Log Change Notices: Not Supported 00:11:36.205 Normal NVM Subsystem Shutdown event: Not Supported 00:11:36.205 Zone Descriptor Change Notices: Not Supported 00:11:36.205 Discovery Log Change Notices: Not Supported 00:11:36.205 Controller Attributes 00:11:36.205 128-bit Host Identifier: Not Supported 00:11:36.205 Non-Operational Permissive Mode: Not Supported 00:11:36.205 NVM Sets: Not Supported 00:11:36.205 Read Recovery Levels: Not Supported 00:11:36.205 Endurance Groups: Not Supported 00:11:36.205 Predictable Latency Mode: Not Supported 00:11:36.205 Traffic Based Keep Alive: Not Supported 00:11:36.205 Namespace Granularity: Not Supported 00:11:36.205 SQ Associations: Not Supported 00:11:36.205 UUID List: Not Supported 00:11:36.205 Multi-Domain Subsystem: Not Supported 00:11:36.205 Fixed Capacity Management: Not Supported 00:11:36.205 Variable Capacity Management: Not Supported 00:11:36.205 Delete Endurance Group: Not Supported 00:11:36.205 Delete NVM Set: Not Supported 00:11:36.205 Extended LBA Formats Supported: Supported 00:11:36.205 Flexible Data Placement Supported: Not Supported 00:11:36.205 00:11:36.205 Controller Memory Buffer Support 00:11:36.205 ================================ 00:11:36.205 Supported: No 00:11:36.205 00:11:36.205 Persistent Memory Region Support 00:11:36.205 ================================ 00:11:36.205 Supported: No 00:11:36.205 00:11:36.205 Admin Command Set Attributes 00:11:36.205 ============================ 00:11:36.205 Security Send/Receive: Not Supported 00:11:36.205 Format NVM: Supported 00:11:36.205 Firmware Activate/Download: Not Supported 00:11:36.205 Namespace Management: Supported 00:11:36.205 Device Self-Test: Not Supported 00:11:36.205 Directives: Supported 00:11:36.205 NVMe-MI: Not Supported 00:11:36.205 Virtualization Management: Not Supported 00:11:36.205 Doorbell Buffer Config: Supported 00:11:36.205 Get LBA Status Capability: Not Supported 00:11:36.205 Command & Feature Lockdown Capability: Not Supported 00:11:36.205 Abort Command Limit: 4 00:11:36.205 Async Event Request Limit: 4 00:11:36.205 Number of Firmware Slots: N/A 00:11:36.205 Firmware Slot 1 Read-Only: N/A 00:11:36.205 Firmware Activation Without Reset: N/A 00:11:36.205 Multiple Update Detection Support: N/A 00:11:36.205 Firmware Update Granularity: No Information Provided 00:11:36.205 Per-Namespace SMART Log: Yes 00:11:36.206 Asymmetric Namespace Access Log Page: Not Supported 00:11:36.206 Subsystem NQN: nqn.2019-08.org.qemu:12341
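The limits just reported pin down the per-queue memory footprint: with Maximum Queue Entries at 2048, and the 64-byte submission / 16-byte completion entry sizes listed in the command-set attributes that follow, one full-sized submission queue needs 128 KiB of contiguous memory (Contiguous Queues Required: Yes) and one completion queue needs 32 KiB. Checked in shell arithmetic:
  # Per-queue footprint at the reported limits, in KiB.
  echo $(( 2048 * 64 / 1024 ))   # submission queue: 128
  echo $(( 2048 * 16 / 1024 ))   # completion queue: 32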
00:11:36.206 Command Effects Log Page: Supported 00:11:36.206 Get Log Page Extended Data: Supported 00:11:36.206 Telemetry Log Pages: Not Supported 00:11:36.206 Persistent Event Log Pages: Not Supported 00:11:36.206 Supported Log Pages Log Page: May Support 00:11:36.206 Commands Supported & Effects Log Page: Not Supported 00:11:36.206 Feature Identifiers & Effects Log Page: May Support 00:11:36.206 NVMe-MI Commands & Effects Log Page: May Support 00:11:36.206 Data Area 4 for Telemetry Log: Not Supported 00:11:36.206 Error Log Page Entries Supported: 1 00:11:36.206 Keep Alive: Not Supported 00:11:36.206 00:11:36.206 NVM Command Set Attributes 00:11:36.206 ========================== 00:11:36.206 Submission Queue Entry Size 00:11:36.206 Max: 64 00:11:36.206 Min: 64 00:11:36.206 Completion Queue Entry Size 00:11:36.206 Max: 16 00:11:36.206 Min: 16 00:11:36.206 Number of Namespaces: 256 00:11:36.206 Compare Command: Supported 00:11:36.206 Write Uncorrectable Command: Not Supported 00:11:36.206 Dataset Management Command: Supported 00:11:36.206 Write Zeroes Command: Supported 00:11:36.206 Set Features Save Field: Supported 00:11:36.206 Reservations: Not Supported 00:11:36.206 Timestamp: Supported 00:11:36.206 Copy: Supported 00:11:36.206 Volatile Write Cache: Present 00:11:36.206 Atomic Write Unit (Normal): 1 00:11:36.206 Atomic Write Unit (PFail): 1 00:11:36.206 Atomic Compare & Write Unit: 1 00:11:36.206 Fused Compare & Write: Not Supported 00:11:36.206 Scatter-Gather List 00:11:36.206 SGL Command Set: Supported 00:11:36.206 SGL Keyed: Not Supported 00:11:36.206 SGL Bit Bucket Descriptor: Not Supported 00:11:36.206 SGL Metadata Pointer: Not Supported 00:11:36.206 Oversized SGL: Not Supported 00:11:36.206 SGL Metadata Address: Not Supported 00:11:36.206 SGL Offset: Not Supported 00:11:36.206 Transport SGL Data Block: Not Supported 00:11:36.206 Replay Protected Memory Block: Not Supported 00:11:36.206 00:11:36.206 Firmware Slot Information 00:11:36.206 ========================= 00:11:36.206 Active slot: 1 00:11:36.206 Slot 1 Firmware Revision: 1.0 00:11:36.206 00:11:36.206 00:11:36.206 Commands Supported and Effects 00:11:36.206 ============================== 00:11:36.206 Admin Commands 00:11:36.206 -------------- 00:11:36.206 Delete I/O Submission Queue (00h): Supported 00:11:36.206 Create I/O Submission Queue (01h): Supported 00:11:36.206 Get Log Page (02h): Supported 00:11:36.206 Delete I/O Completion Queue (04h): Supported 00:11:36.206 Create I/O Completion Queue (05h): Supported 00:11:36.206 Identify (06h): Supported 00:11:36.206 Abort (08h): Supported 00:11:36.206 Set Features (09h): Supported 00:11:36.206 Get Features (0Ah): Supported 00:11:36.206 Asynchronous Event Request (0Ch): Supported 00:11:36.206 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:36.206 Directive Send (19h): Supported 00:11:36.206 Directive Receive (1Ah): Supported 00:11:36.206 Virtualization Management (1Ch): Supported 00:11:36.206 Doorbell Buffer Config (7Ch): Supported 00:11:36.206 Format NVM (80h): Supported LBA-Change 00:11:36.206 I/O Commands 00:11:36.206 ------------ 00:11:36.206 Flush (00h): Supported LBA-Change 00:11:36.206 Write (01h): Supported LBA-Change 00:11:36.206 Read (02h): Supported 00:11:36.206 Compare (05h): Supported 00:11:36.206 Write Zeroes (08h): Supported LBA-Change 00:11:36.206 Dataset Management (09h): Supported LBA-Change 00:11:36.206 Unknown (0Ch): Supported 00:11:36.206 Unknown (12h): Supported 00:11:36.206 Copy (19h): Supported LBA-Change 00:11:36.206 Unknown (1Dh):
Supported LBA-Change 00:11:36.206 00:11:36.206 Error Log 00:11:36.206 ========= 00:11:36.206 00:11:36.206 Arbitration 00:11:36.206 =========== 00:11:36.206 Arbitration Burst: no limit 00:11:36.206 00:11:36.206 Power Management 00:11:36.206 ================ 00:11:36.206 Number of Power States: 1 00:11:36.206 Current Power State: Power State #0 00:11:36.206 Power State #0: 00:11:36.206 Max Power: 25.00 W 00:11:36.206 Non-Operational State: Operational 00:11:36.206 Entry Latency: 16 microseconds 00:11:36.206 Exit Latency: 4 microseconds 00:11:36.206 Relative Read Throughput: 0 00:11:36.206 Relative Read Latency: 0 00:11:36.206 Relative Write Throughput: 0 00:11:36.206 Relative Write Latency: 0 00:11:36.206 Idle Power: Not Reported 00:11:36.206 Active Power: Not Reported 00:11:36.206 Non-Operational Permissive Mode: Not Supported 00:11:36.206 00:11:36.206 Health Information 00:11:36.206 ================== 00:11:36.206 Critical Warnings: 00:11:36.206 Available Spare Space: OK 00:11:36.206 Temperature: OK 00:11:36.206 Device Reliability: OK 00:11:36.206 Read Only: No 00:11:36.206 Volatile Memory Backup: OK 00:11:36.206 Current Temperature: 323 Kelvin (50 Celsius) 00:11:36.207 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:36.207 Available Spare: 0% 00:11:36.207 Available Spare Threshold: 0% 00:11:36.207 Life Percentage Used: 0% 00:11:36.207 Data Units Read: 1017 00:11:36.207 Data Units Written: 887 00:11:36.207 Host Read Commands: 52933 00:11:36.207 Host Write Commands: 51820 00:11:36.207 Controller Busy Time: 0 minutes 00:11:36.207 Power Cycles: 0 00:11:36.207 Power On Hours: 0 hours 00:11:36.207 Unsafe Shutdowns: 0 00:11:36.207 Unrecoverable Media Errors: 0 00:11:36.207 Lifetime Error Log Entries: 0 00:11:36.207 Warning Temperature Time: 0 minutes 00:11:36.207 Critical Temperature Time: 0 minutes 00:11:36.207 00:11:36.207 Number of Queues 00:11:36.207 ================ 00:11:36.207 Number of I/O Submission Queues: 64 00:11:36.207 Number of I/O Completion Queues: 64 00:11:36.207 00:11:36.207 ZNS Specific Controller Data 00:11:36.207 ============================ 00:11:36.207 Zone Append Size Limit: 0 00:11:36.207 00:11:36.207 00:11:36.207 Active Namespaces 00:11:36.207 ================= 00:11:36.207 Namespace ID:1 00:11:36.207 Error Recovery Timeout: Unlimited 00:11:36.207 Command Set Identifier: NVM (00h) 00:11:36.207 Deallocate: Supported 00:11:36.207 Deallocated/Unwritten Error: Supported 00:11:36.207 Deallocated Read Value: All 0x00 00:11:36.207 Deallocate in Write Zeroes: Not Supported 00:11:36.207 Deallocated Guard Field: 0xFFFF 00:11:36.207 Flush: Supported 00:11:36.207 Reservation: Not Supported 00:11:36.207 Namespace Sharing Capabilities: Private 00:11:36.207 Size (in LBAs): 1310720 (5GiB) 00:11:36.207 Capacity (in LBAs): 1310720 (5GiB) 00:11:36.207 Utilization (in LBAs): 1310720 (5GiB) 00:11:36.207 Thin Provisioning: Not Supported 00:11:36.207 Per-NS Atomic Units: No 00:11:36.207 Maximum Single Source Range Length: 128 00:11:36.207 Maximum Copy Length: 128 00:11:36.207 Maximum Source Range Count: 128 00:11:36.207 NGUID/EUI64 Never Reused: No 00:11:36.207 Namespace Write Protected: No 00:11:36.207 Number of LBA Formats: 8 00:11:36.207 Current LBA Format: LBA Format #04 00:11:36.207 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:36.207 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:36.207 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:36.207 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:36.207 LBA Format #04: Data Size: 4096 Metadata Size: 0 
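The namespace geometry just printed is self-consistent: 1310720 LBAs at the current format's 4096-byte data size (LBA Format #04) is exactly 5 GiB, matching the reported size. A one-line sanity check:
  # 1310720 LBAs x 4096 B = 5368709120 B = 5 GiB (5 * 2^30).
  echo $(( 1310720 * 4096 )) $(( 1310720 * 4096 / 1024**3 ))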
00:11:36.207 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:36.207 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:36.207 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:36.207 00:11:36.207 NVM Specific Namespace Data 00:11:36.207 =========================== 00:11:36.207 Logical Block Storage Tag Mask: 0 00:11:36.207 Protection Information Capabilities: 00:11:36.207 16b Guard Protection Information Storage Tag Support: No 00:11:36.207 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:36.207 Storage Tag Check Read Support: No 00:11:36.207 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:36.207 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:36.207 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:36.207 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:36.207 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:36.207 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:36.207 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:36.207 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:36.207 06:37:48 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:36.207 06:37:48 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:11:36.467 ===================================================== 00:11:36.468 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:36.468 ===================================================== 00:11:36.468 Controller Capabilities/Features 00:11:36.468 ================================ 00:11:36.468 Vendor ID: 1b36 00:11:36.468 Subsystem Vendor ID: 1af4 00:11:36.468 Serial Number: 12342 00:11:36.468 Model Number: QEMU NVMe Ctrl 00:11:36.468 Firmware Version: 8.0.0 00:11:36.468 Recommended Arb Burst: 6 00:11:36.468 IEEE OUI Identifier: 00 54 52 00:11:36.468 Multi-path I/O 00:11:36.468 May have multiple subsystem ports: No 00:11:36.468 May have multiple controllers: No 00:11:36.468 Associated with SR-IOV VF: No 00:11:36.468 Max Data Transfer Size: 524288 00:11:36.468 Max Number of Namespaces: 256 00:11:36.468 Max Number of I/O Queues: 64 00:11:36.468 NVMe Specification Version (VS): 1.4 00:11:36.468 NVMe Specification Version (Identify): 1.4 00:11:36.468 Maximum Queue Entries: 2048 00:11:36.468 Contiguous Queues Required: Yes 00:11:36.468 Arbitration Mechanisms Supported 00:11:36.468 Weighted Round Robin: Not Supported 00:11:36.468 Vendor Specific: Not Supported 00:11:36.468 Reset Timeout: 7500 ms 00:11:36.468 Doorbell Stride: 4 bytes 00:11:36.468 NVM Subsystem Reset: Not Supported 00:11:36.468 Command Sets Supported 00:11:36.468 NVM Command Set: Supported 00:11:36.468 Boot Partition: Not Supported 00:11:36.468 Memory Page Size Minimum: 4096 bytes 00:11:36.468 Memory Page Size Maximum: 65536 bytes 00:11:36.468 Persistent Memory Region: Not Supported 00:11:36.468 Optional Asynchronous Events Supported 00:11:36.468 Namespace Attribute Notices: Supported 00:11:36.468 Firmware Activation Notices: Not Supported 00:11:36.468 ANA Change Notices: Not Supported 00:11:36.468 PLE Aggregate Log Change Notices: Not Supported 00:11:36.468 LBA Status Info Alert Notices: 
Not Supported 00:11:36.468 EGE Aggregate Log Change Notices: Not Supported 00:11:36.468 Normal NVM Subsystem Shutdown event: Not Supported 00:11:36.468 Zone Descriptor Change Notices: Not Supported 00:11:36.468 Discovery Log Change Notices: Not Supported 00:11:36.468 Controller Attributes 00:11:36.468 128-bit Host Identifier: Not Supported 00:11:36.468 Non-Operational Permissive Mode: Not Supported 00:11:36.468 NVM Sets: Not Supported 00:11:36.468 Read Recovery Levels: Not Supported 00:11:36.468 Endurance Groups: Not Supported 00:11:36.468 Predictable Latency Mode: Not Supported 00:11:36.468 Traffic Based Keep Alive: Not Supported 00:11:36.468 Namespace Granularity: Not Supported 00:11:36.468 SQ Associations: Not Supported 00:11:36.468 UUID List: Not Supported 00:11:36.468 Multi-Domain Subsystem: Not Supported 00:11:36.468 Fixed Capacity Management: Not Supported 00:11:36.468 Variable Capacity Management: Not Supported 00:11:36.468 Delete Endurance Group: Not Supported 00:11:36.468 Delete NVM Set: Not Supported 00:11:36.468 Extended LBA Formats Supported: Supported 00:11:36.468 Flexible Data Placement Supported: Not Supported 00:11:36.468 00:11:36.468 Controller Memory Buffer Support 00:11:36.468 ================================ 00:11:36.468 Supported: No 00:11:36.468 00:11:36.468 Persistent Memory Region Support 00:11:36.468 ================================ 00:11:36.468 Supported: No 00:11:36.468 00:11:36.468 Admin Command Set Attributes 00:11:36.468 ============================ 00:11:36.468 Security Send/Receive: Not Supported 00:11:36.468 Format NVM: Supported 00:11:36.468 Firmware Activate/Download: Not Supported 00:11:36.468 Namespace Management: Supported 00:11:36.468 Device Self-Test: Not Supported 00:11:36.468 Directives: Supported 00:11:36.468 NVMe-MI: Not Supported 00:11:36.468 Virtualization Management: Not Supported 00:11:36.468 Doorbell Buffer Config: Supported 00:11:36.468 Get LBA Status Capability: Not Supported 00:11:36.468 Command & Feature Lockdown Capability: Not Supported 00:11:36.468 Abort Command Limit: 4 00:11:36.468 Async Event Request Limit: 4 00:11:36.468 Number of Firmware Slots: N/A 00:11:36.468 Firmware Slot 1 Read-Only: N/A 00:11:36.468 Firmware Activation Without Reset: N/A 00:11:36.468 Multiple Update Detection Support: N/A 00:11:36.468 Firmware Update Granularity: No Information Provided 00:11:36.468 Per-Namespace SMART Log: Yes 00:11:36.468 Asymmetric Namespace Access Log Page: Not Supported 00:11:36.468 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:11:36.468 Command Effects Log Page: Supported 00:11:36.468 Get Log Page Extended Data: Supported 00:11:36.468 Telemetry Log Pages: Not Supported 00:11:36.468 Persistent Event Log Pages: Not Supported 00:11:36.468 Supported Log Pages Log Page: May Support 00:11:36.468 Commands Supported & Effects Log Page: Not Supported 00:11:36.468 Feature Identifiers & Effects Log Page: May Support 00:11:36.468 NVMe-MI Commands & Effects Log Page: May Support 00:11:36.468 Data Area 4 for Telemetry Log: Not Supported 00:11:36.468 Error Log Page Entries Supported: 1 00:11:36.468 Keep Alive: Not Supported 00:11:36.468 00:11:36.468 NVM Command Set Attributes 00:11:36.468 ========================== 00:11:36.468 Submission Queue Entry Size 00:11:36.468 Max: 64 00:11:36.468 Min: 64 00:11:36.468 Completion Queue Entry Size 00:11:36.468 Max: 16 00:11:36.468 Min: 16 00:11:36.468 Number of Namespaces: 256 00:11:36.468 Compare Command: Supported 00:11:36.468 Write Uncorrectable Command: Not Supported 00:11:36.468 Dataset Management Command:
Supported 00:11:36.468 Write Zeroes Command: Supported 00:11:36.468 Set Features Save Field: Supported 00:11:36.468 Reservations: Not Supported 00:11:36.468 Timestamp: Supported 00:11:36.468 Copy: Supported 00:11:36.468 Volatile Write Cache: Present 00:11:36.468 Atomic Write Unit (Normal): 1 00:11:36.468 Atomic Write Unit (PFail): 1 00:11:36.468 Atomic Compare & Write Unit: 1 00:11:36.468 Fused Compare & Write: Not Supported 00:11:36.468 Scatter-Gather List 00:11:36.468 SGL Command Set: Supported 00:11:36.468 SGL Keyed: Not Supported 00:11:36.468 SGL Bit Bucket Descriptor: Not Supported 00:11:36.468 SGL Metadata Pointer: Not Supported 00:11:36.468 Oversized SGL: Not Supported 00:11:36.468 SGL Metadata Address: Not Supported 00:11:36.468 SGL Offset: Not Supported 00:11:36.468 Transport SGL Data Block: Not Supported 00:11:36.468 Replay Protected Memory Block: Not Supported 00:11:36.468 00:11:36.468 Firmware Slot Information 00:11:36.468 ========================= 00:11:36.468 Active slot: 1 00:11:36.468 Slot 1 Firmware Revision: 1.0 00:11:36.468 00:11:36.468 00:11:36.468 Commands Supported and Effects 00:11:36.468 ============================== 00:11:36.468 Admin Commands 00:11:36.468 -------------- 00:11:36.468 Delete I/O Submission Queue (00h): Supported 00:11:36.468 Create I/O Submission Queue (01h): Supported 00:11:36.468 Get Log Page (02h): Supported 00:11:36.468 Delete I/O Completion Queue (04h): Supported 00:11:36.468 Create I/O Completion Queue (05h): Supported 00:11:36.468 Identify (06h): Supported 00:11:36.468 Abort (08h): Supported 00:11:36.468 Set Features (09h): Supported 00:11:36.468 Get Features (0Ah): Supported 00:11:36.468 Asynchronous Event Request (0Ch): Supported 00:11:36.468 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:36.468 Directive Send (19h): Supported 00:11:36.468 Directive Receive (1Ah): Supported 00:11:36.468 Virtualization Management (1Ch): Supported 00:11:36.468 Doorbell Buffer Config (7Ch): Supported 00:11:36.468 Format NVM (80h): Supported LBA-Change 00:11:36.468 I/O Commands 00:11:36.468 ------------ 00:11:36.468 Flush (00h): Supported LBA-Change 00:11:36.468 Write (01h): Supported LBA-Change 00:11:36.468 Read (02h): Supported 00:11:36.468 Compare (05h): Supported 00:11:36.468 Write Zeroes (08h): Supported LBA-Change 00:11:36.468 Dataset Management (09h): Supported LBA-Change 00:11:36.468 Unknown (0Ch): Supported 00:11:36.468 Unknown (12h): Supported 00:11:36.468 Copy (19h): Supported LBA-Change 00:11:36.468 Unknown (1Dh): Supported LBA-Change 00:11:36.468 00:11:36.468 Error Log 00:11:36.468 ========= 00:11:36.468 00:11:36.468 Arbitration 00:11:36.468 =========== 00:11:36.468 Arbitration Burst: no limit 00:11:36.468 00:11:36.468 Power Management 00:11:36.468 ================ 00:11:36.468 Number of Power States: 1 00:11:36.468 Current Power State: Power State #0 00:11:36.468 Power State #0: 00:11:36.468 Max Power: 25.00 W 00:11:36.468 Non-Operational State: Operational 00:11:36.468 Entry Latency: 16 microseconds 00:11:36.468 Exit Latency: 4 microseconds 00:11:36.469 Relative Read Throughput: 0 00:11:36.469 Relative Read Latency: 0 00:11:36.469 Relative Write Throughput: 0 00:11:36.469 Relative Write Latency: 0 00:11:36.469 Idle Power: Not Reported 00:11:36.469 Active Power: Not Reported 00:11:36.469 Non-Operational Permissive Mode: Not Supported 00:11:36.469 00:11:36.469 Health Information 00:11:36.469 ================== 00:11:36.469 Critical Warnings: 00:11:36.469 Available Spare Space: OK 00:11:36.469 Temperature: OK 00:11:36.469 Device 
Reliability: OK 00:11:36.469 Read Only: No 00:11:36.469 Volatile Memory Backup: OK 00:11:36.469 Current Temperature: 323 Kelvin (50 Celsius) 00:11:36.469 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:36.469 Available Spare: 0% 00:11:36.469 Available Spare Threshold: 0% 00:11:36.469 Life Percentage Used: 0% 00:11:36.469 Data Units Read: 2070 00:11:36.469 Data Units Written: 1857 00:11:36.469 Host Read Commands: 104311 00:11:36.469 Host Write Commands: 102580 00:11:36.469 Controller Busy Time: 0 minutes 00:11:36.469 Power Cycles: 0 00:11:36.469 Power On Hours: 0 hours 00:11:36.469 Unsafe Shutdowns: 0 00:11:36.469 Unrecoverable Media Errors: 0 00:11:36.469 Lifetime Error Log Entries: 0 00:11:36.469 Warning Temperature Time: 0 minutes 00:11:36.469 Critical Temperature Time: 0 minutes 00:11:36.469 00:11:36.469 Number of Queues 00:11:36.469 ================ 00:11:36.469 Number of I/O Submission Queues: 64 00:11:36.469 Number of I/O Completion Queues: 64 00:11:36.469 00:11:36.469 ZNS Specific Controller Data 00:11:36.469 ============================ 00:11:36.469 Zone Append Size Limit: 0 00:11:36.469 00:11:36.469 00:11:36.469 Active Namespaces 00:11:36.469 ================= 00:11:36.469 Namespace ID:1 00:11:36.469 Error Recovery Timeout: Unlimited 00:11:36.469 Command Set Identifier: NVM (00h) 00:11:36.469 Deallocate: Supported 00:11:36.469 Deallocated/Unwritten Error: Supported 00:11:36.469 Deallocated Read Value: All 0x00 00:11:36.469 Deallocate in Write Zeroes: Not Supported 00:11:36.469 Deallocated Guard Field: 0xFFFF 00:11:36.469 Flush: Supported 00:11:36.469 Reservation: Not Supported 00:11:36.469 Namespace Sharing Capabilities: Private 00:11:36.469 Size (in LBAs): 1048576 (4GiB) 00:11:36.469 Capacity (in LBAs): 1048576 (4GiB) 00:11:36.469 Utilization (in LBAs): 1048576 (4GiB) 00:11:36.469 Thin Provisioning: Not Supported 00:11:36.469 Per-NS Atomic Units: No 00:11:36.469 Maximum Single Source Range Length: 128 00:11:36.469 Maximum Copy Length: 128 00:11:36.469 Maximum Source Range Count: 128 00:11:36.469 NGUID/EUI64 Never Reused: No 00:11:36.469 Namespace Write Protected: No 00:11:36.469 Number of LBA Formats: 8 00:11:36.469 Current LBA Format: LBA Format #04 00:11:36.469 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:36.469 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:36.469 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:36.469 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:36.469 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:36.469 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:36.469 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:36.469 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:36.469 00:11:36.469 NVM Specific Namespace Data 00:11:36.469 =========================== 00:11:36.469 Logical Block Storage Tag Mask: 0 00:11:36.469 Protection Information Capabilities: 00:11:36.469 16b Guard Protection Information Storage Tag Support: No 00:11:36.469 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:36.469 Storage Tag Check Read Support: No 00:11:36.469 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:36.469 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:36.469 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:36.469 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:36.469 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:36.469 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:36.469 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:36.469 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:36.469 Namespace ID:2 00:11:36.469 Error Recovery Timeout: Unlimited 00:11:36.469 Command Set Identifier: NVM (00h) 00:11:36.469 Deallocate: Supported 00:11:36.469 Deallocated/Unwritten Error: Supported 00:11:36.469 Deallocated Read Value: All 0x00 00:11:36.469 Deallocate in Write Zeroes: Not Supported 00:11:36.469 Deallocated Guard Field: 0xFFFF 00:11:36.469 Flush: Supported 00:11:36.469 Reservation: Not Supported 00:11:36.469 Namespace Sharing Capabilities: Private 00:11:36.469 Size (in LBAs): 1048576 (4GiB) 00:11:36.469 Capacity (in LBAs): 1048576 (4GiB) 00:11:36.469 Utilization (in LBAs): 1048576 (4GiB) 00:11:36.469 Thin Provisioning: Not Supported 00:11:36.469 Per-NS Atomic Units: No 00:11:36.469 Maximum Single Source Range Length: 128 00:11:36.469 Maximum Copy Length: 128 00:11:36.469 Maximum Source Range Count: 128 00:11:36.469 NGUID/EUI64 Never Reused: No 00:11:36.469 Namespace Write Protected: No 00:11:36.469 Number of LBA Formats: 8 00:11:36.469 Current LBA Format: LBA Format #04 00:11:36.469 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:36.469 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:36.469 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:36.469 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:36.469 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:36.469 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:36.469 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:36.469 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:36.469 00:11:36.469 NVM Specific Namespace Data 00:11:36.469 =========================== 00:11:36.469 Logical Block Storage Tag Mask: 0 00:11:36.469 Protection Information Capabilities: 00:11:36.469 16b Guard Protection Information Storage Tag Support: No 00:11:36.469 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:36.469 Storage Tag Check Read Support: No 00:11:36.469 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:36.469 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:36.469 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:36.469 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:36.469 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:36.469 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:36.469 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:36.469 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:36.469 Namespace ID:3 00:11:36.469 Error Recovery Timeout: Unlimited 00:11:36.469 Command Set Identifier: NVM (00h) 00:11:36.469 Deallocate: Supported 00:11:36.469 Deallocated/Unwritten Error: Supported 00:11:36.469 Deallocated Read Value: All 0x00 00:11:36.469 Deallocate in Write Zeroes: Not Supported 00:11:36.469 Deallocated Guard Field: 0xFFFF 00:11:36.469 Flush: Supported 00:11:36.469 Reservation: Not Supported 00:11:36.469 
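On the health blocks in these dumps: the identify tool prints the composite temperature in Kelvin and derives the Celsius value with a flat 273 K offset, so 323 Kelvin reads as 50 Celsius and the 343 Kelvin threshold as 70 Celsius. The same conversion in shell arithmetic:
  # Kelvin-to-Celsius as printed in the health information sections.
  echo $(( 323 - 273 ))   # current temperature: 50 C
  echo $(( 343 - 273 ))   # threshold: 70 C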
Namespace Sharing Capabilities: Private 00:11:36.469 Size (in LBAs): 1048576 (4GiB) 00:11:36.469 Capacity (in LBAs): 1048576 (4GiB) 00:11:36.469 Utilization (in LBAs): 1048576 (4GiB) 00:11:36.469 Thin Provisioning: Not Supported 00:11:36.469 Per-NS Atomic Units: No 00:11:36.469 Maximum Single Source Range Length: 128 00:11:36.469 Maximum Copy Length: 128 00:11:36.469 Maximum Source Range Count: 128 00:11:36.469 NGUID/EUI64 Never Reused: No 00:11:36.469 Namespace Write Protected: No 00:11:36.469 Number of LBA Formats: 8 00:11:36.469 Current LBA Format: LBA Format #04 00:11:36.469 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:36.469 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:36.469 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:36.469 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:36.469 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:36.469 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:36.469 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:36.469 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:36.469 00:11:36.469 NVM Specific Namespace Data 00:11:36.469 =========================== 00:11:36.469 Logical Block Storage Tag Mask: 0 00:11:36.469 Protection Information Capabilities: 00:11:36.469 16b Guard Protection Information Storage Tag Support: No 00:11:36.469 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:36.469 Storage Tag Check Read Support: No 00:11:36.469 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:36.469 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:36.469 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:36.469 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:36.469 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:36.470 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:36.470 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:36.470 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:36.470 06:37:49 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:36.470 06:37:49 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:11:36.728 ===================================================== 00:11:36.729 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:36.729 ===================================================== 00:11:36.729 Controller Capabilities/Features 00:11:36.729 ================================ 00:11:36.729 Vendor ID: 1b36 00:11:36.729 Subsystem Vendor ID: 1af4 00:11:36.729 Serial Number: 12343 00:11:36.729 Model Number: QEMU NVMe Ctrl 00:11:36.729 Firmware Version: 8.0.0 00:11:36.729 Recommended Arb Burst: 6 00:11:36.729 IEEE OUI Identifier: 00 54 52 00:11:36.729 Multi-path I/O 00:11:36.729 May have multiple subsystem ports: No 00:11:36.729 May have multiple controllers: Yes 00:11:36.729 Associated with SR-IOV VF: No 00:11:36.729 Max Data Transfer Size: 524288 00:11:36.729 Max Number of Namespaces: 256 00:11:36.729 Max Number of I/O Queues: 64 00:11:36.729 NVMe Specification Version (VS): 1.4 00:11:36.729 NVMe Specification Version (Identify): 1.4 00:11:36.729 Maximum Queue Entries: 2048 
00:11:36.729 Contiguous Queues Required: Yes 00:11:36.729 Arbitration Mechanisms Supported 00:11:36.729 Weighted Round Robin: Not Supported 00:11:36.729 Vendor Specific: Not Supported 00:11:36.729 Reset Timeout: 7500 ms 00:11:36.729 Doorbell Stride: 4 bytes 00:11:36.729 NVM Subsystem Reset: Not Supported 00:11:36.729 Command Sets Supported 00:11:36.729 NVM Command Set: Supported 00:11:36.729 Boot Partition: Not Supported 00:11:36.729 Memory Page Size Minimum: 4096 bytes 00:11:36.729 Memory Page Size Maximum: 65536 bytes 00:11:36.729 Persistent Memory Region: Not Supported 00:11:36.729 Optional Asynchronous Events Supported 00:11:36.729 Namespace Attribute Notices: Supported 00:11:36.729 Firmware Activation Notices: Not Supported 00:11:36.729 ANA Change Notices: Not Supported 00:11:36.729 PLE Aggregate Log Change Notices: Not Supported 00:11:36.729 LBA Status Info Alert Notices: Not Supported 00:11:36.729 EGE Aggregate Log Change Notices: Not Supported 00:11:36.729 Normal NVM Subsystem Shutdown event: Not Supported 00:11:36.729 Zone Descriptor Change Notices: Not Supported 00:11:36.729 Discovery Log Change Notices: Not Supported 00:11:36.729 Controller Attributes 00:11:36.729 128-bit Host Identifier: Not Supported 00:11:36.729 Non-Operational Permissive Mode: Not Supported 00:11:36.729 NVM Sets: Not Supported 00:11:36.729 Read Recovery Levels: Not Supported 00:11:36.729 Endurance Groups: Supported 00:11:36.729 Predictable Latency Mode: Not Supported 00:11:36.729 Traffic Based Keep Alive: Not Supported 00:11:36.729 Namespace Granularity: Not Supported 00:11:36.729 SQ Associations: Not Supported 00:11:36.729 UUID List: Not Supported 00:11:36.729 Multi-Domain Subsystem: Not Supported 00:11:36.729 Fixed Capacity Management: Not Supported 00:11:36.729 Variable Capacity Management: Not Supported 00:11:36.729 Delete Endurance Group: Not Supported 00:11:36.729 Delete NVM Set: Not Supported 00:11:36.729 Extended LBA Formats Supported: Supported 00:11:36.729 Flexible Data Placement Supported: Supported 00:11:36.729 00:11:36.729 Controller Memory Buffer Support 00:11:36.729 ================================ 00:11:36.729 Supported: No 00:11:36.729 00:11:36.729 Persistent Memory Region Support 00:11:36.729 ================================ 00:11:36.729 Supported: No 00:11:36.729 00:11:36.729 Admin Command Set Attributes 00:11:36.729 ============================ 00:11:36.729 Security Send/Receive: Not Supported 00:11:36.729 Format NVM: Supported 00:11:36.729 Firmware Activate/Download: Not Supported 00:11:36.729 Namespace Management: Supported 00:11:36.729 Device Self-Test: Not Supported 00:11:36.729 Directives: Supported 00:11:36.729 NVMe-MI: Not Supported 00:11:36.729 Virtualization Management: Not Supported 00:11:36.729 Doorbell Buffer Config: Supported 00:11:36.729 Get LBA Status Capability: Not Supported 00:11:36.729 Command & Feature Lockdown Capability: Not Supported 00:11:36.729 Abort Command Limit: 4 00:11:36.729 Async Event Request Limit: 4 00:11:36.729 Number of Firmware Slots: N/A 00:11:36.729 Firmware Slot 1 Read-Only: N/A 00:11:36.729 Firmware Activation Without Reset: N/A 00:11:36.729 Multiple Update Detection Support: N/A 00:11:36.729 Firmware Update Granularity: No Information Provided 00:11:36.729 Per-Namespace SMART Log: Yes 00:11:36.729 Asymmetric Namespace Access Log Page: Not Supported 00:11:36.729 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:11:36.729 Command Effects Log Page: Supported 00:11:36.729 Get Log Page Extended Data: Supported 00:11:36.729 Telemetry Log Pages: Not 
Supported 00:11:36.729 Persistent Event Log Pages: Not Supported 00:11:36.729 Supported Log Pages Log Page: May Support 00:11:36.729 Commands Supported & Effects Log Page: Not Supported 00:11:36.729 Feature Identifiers & Effects Log Page: May Support 00:11:36.729 NVMe-MI Commands & Effects Log Page: May Support 00:11:36.729 Data Area 4 for Telemetry Log: Not Supported 00:11:36.729 Error Log Page Entries Supported: 1 00:11:36.729 Keep Alive: Not Supported 00:11:36.729 00:11:36.729 NVM Command Set Attributes 00:11:36.729 ========================== 00:11:36.729 Submission Queue Entry Size 00:11:36.729 Max: 64 00:11:36.729 Min: 64 00:11:36.729 Completion Queue Entry Size 00:11:36.729 Max: 16 00:11:36.729 Min: 16 00:11:36.729 Number of Namespaces: 256 00:11:36.729 Compare Command: Supported 00:11:36.729 Write Uncorrectable Command: Not Supported 00:11:36.729 Dataset Management Command: Supported 00:11:36.729 Write Zeroes Command: Supported 00:11:36.729 Set Features Save Field: Supported 00:11:36.729 Reservations: Not Supported 00:11:36.729 Timestamp: Supported 00:11:36.729 Copy: Supported 00:11:36.729 Volatile Write Cache: Present 00:11:36.729 Atomic Write Unit (Normal): 1 00:11:36.729 Atomic Write Unit (PFail): 1 00:11:36.729 Atomic Compare & Write Unit: 1 00:11:36.729 Fused Compare & Write: Not Supported 00:11:36.729 Scatter-Gather List 00:11:36.729 SGL Command Set: Supported 00:11:36.729 SGL Keyed: Not Supported 00:11:36.729 SGL Bit Bucket Descriptor: Not Supported 00:11:36.729 SGL Metadata Pointer: Not Supported 00:11:36.729 Oversized SGL: Not Supported 00:11:36.729 SGL Metadata Address: Not Supported 00:11:36.729 SGL Offset: Not Supported 00:11:36.729 Transport SGL Data Block: Not Supported 00:11:36.729 Replay Protected Memory Block: Not Supported 00:11:36.729 00:11:36.729 Firmware Slot Information 00:11:36.729 ========================= 00:11:36.729 Active slot: 1 00:11:36.729 Slot 1 Firmware Revision: 1.0 00:11:36.729 00:11:36.729 00:11:36.729 Commands Supported and Effects 00:11:36.729 ============================== 00:11:36.729 Admin Commands 00:11:36.729 -------------- 00:11:36.729 Delete I/O Submission Queue (00h): Supported 00:11:36.729 Create I/O Submission Queue (01h): Supported 00:11:36.729 Get Log Page (02h): Supported 00:11:36.729 Delete I/O Completion Queue (04h): Supported 00:11:36.729 Create I/O Completion Queue (05h): Supported 00:11:36.729 Identify (06h): Supported 00:11:36.729 Abort (08h): Supported 00:11:36.729 Set Features (09h): Supported 00:11:36.729 Get Features (0Ah): Supported 00:11:36.729 Asynchronous Event Request (0Ch): Supported 00:11:36.729 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:36.729 Directive Send (19h): Supported 00:11:36.729 Directive Receive (1Ah): Supported 00:11:36.729 Virtualization Management (1Ch): Supported 00:11:36.729 Doorbell Buffer Config (7Ch): Supported 00:11:36.729 Format NVM (80h): Supported LBA-Change 00:11:36.729 I/O Commands 00:11:36.729 ------------ 00:11:36.729 Flush (00h): Supported LBA-Change 00:11:36.729 Write (01h): Supported LBA-Change 00:11:36.729 Read (02h): Supported 00:11:36.729 Compare (05h): Supported 00:11:36.729 Write Zeroes (08h): Supported LBA-Change 00:11:36.729 Dataset Management (09h): Supported LBA-Change 00:11:36.729 Unknown (0Ch): Supported 00:11:36.729 Unknown (12h): Supported 00:11:36.729 Copy (19h): Supported LBA-Change 00:11:36.729 Unknown (1Dh): Supported LBA-Change 00:11:36.729 00:11:36.729 Error Log 00:11:36.729 ========= 00:11:36.729 00:11:36.729 Arbitration 00:11:36.729 =========== 
00:11:36.729 Arbitration Burst: no limit 00:11:36.729 00:11:36.729 Power Management 00:11:36.729 ================ 00:11:36.729 Number of Power States: 1 00:11:36.729 Current Power State: Power State #0 00:11:36.729 Power State #0: 00:11:36.729 Max Power: 25.00 W 00:11:36.729 Non-Operational State: Operational 00:11:36.730 Entry Latency: 16 microseconds 00:11:36.730 Exit Latency: 4 microseconds 00:11:36.730 Relative Read Throughput: 0 00:11:36.730 Relative Read Latency: 0 00:11:36.730 Relative Write Throughput: 0 00:11:36.730 Relative Write Latency: 0 00:11:36.730 Idle Power: Not Reported 00:11:36.730 Active Power: Not Reported 00:11:36.730 Non-Operational Permissive Mode: Not Supported 00:11:36.730 00:11:36.730 Health Information 00:11:36.730 ================== 00:11:36.730 Critical Warnings: 00:11:36.730 Available Spare Space: OK 00:11:36.730 Temperature: OK 00:11:36.730 Device Reliability: OK 00:11:36.730 Read Only: No 00:11:36.730 Volatile Memory Backup: OK 00:11:36.730 Current Temperature: 323 Kelvin (50 Celsius) 00:11:36.730 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:36.730 Available Spare: 0% 00:11:36.730 Available Spare Threshold: 0% 00:11:36.730 Life Percentage Used: 0% 00:11:36.730 Data Units Read: 776 00:11:36.730 Data Units Written: 705 00:11:36.730 Host Read Commands: 35602 00:11:36.730 Host Write Commands: 35024 00:11:36.730 Controller Busy Time: 0 minutes 00:11:36.730 Power Cycles: 0 00:11:36.730 Power On Hours: 0 hours 00:11:36.730 Unsafe Shutdowns: 0 00:11:36.730 Unrecoverable Media Errors: 0 00:11:36.730 Lifetime Error Log Entries: 0 00:11:36.730 Warning Temperature Time: 0 minutes 00:11:36.730 Critical Temperature Time: 0 minutes 00:11:36.730 00:11:36.730 Number of Queues 00:11:36.730 ================ 00:11:36.730 Number of I/O Submission Queues: 64 00:11:36.730 Number of I/O Completion Queues: 64 00:11:36.730 00:11:36.730 ZNS Specific Controller Data 00:11:36.730 ============================ 00:11:36.730 Zone Append Size Limit: 0 00:11:36.730 00:11:36.730 00:11:36.730 Active Namespaces 00:11:36.730 ================= 00:11:36.730 Namespace ID:1 00:11:36.730 Error Recovery Timeout: Unlimited 00:11:36.730 Command Set Identifier: NVM (00h) 00:11:36.730 Deallocate: Supported 00:11:36.730 Deallocated/Unwritten Error: Supported 00:11:36.730 Deallocated Read Value: All 0x00 00:11:36.730 Deallocate in Write Zeroes: Not Supported 00:11:36.730 Deallocated Guard Field: 0xFFFF 00:11:36.730 Flush: Supported 00:11:36.730 Reservation: Not Supported 00:11:36.730 Namespace Sharing Capabilities: Multiple Controllers 00:11:36.730 Size (in LBAs): 262144 (1GiB) 00:11:36.730 Capacity (in LBAs): 262144 (1GiB) 00:11:36.730 Utilization (in LBAs): 262144 (1GiB) 00:11:36.730 Thin Provisioning: Not Supported 00:11:36.730 Per-NS Atomic Units: No 00:11:36.730 Maximum Single Source Range Length: 128 00:11:36.730 Maximum Copy Length: 128 00:11:36.730 Maximum Source Range Count: 128 00:11:36.730 NGUID/EUI64 Never Reused: No 00:11:36.730 Namespace Write Protected: No 00:11:36.730 Endurance group ID: 1 00:11:36.730 Number of LBA Formats: 8 00:11:36.730 Current LBA Format: LBA Format #04 00:11:36.730 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:36.730 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:36.730 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:36.730 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:36.730 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:36.730 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:36.730 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:11:36.730 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:36.730 00:11:36.730 Get Feature FDP: 00:11:36.730 ================ 00:11:36.730 Enabled: Yes 00:11:36.730 FDP configuration index: 0 00:11:36.730 00:11:36.730 FDP configurations log page 00:11:36.730 =========================== 00:11:36.730 Number of FDP configurations: 1 00:11:36.730 Version: 0 00:11:36.730 Size: 112 00:11:36.730 FDP Configuration Descriptor: 0 00:11:36.730 Descriptor Size: 96 00:11:36.730 Reclaim Group Identifier format: 2 00:11:36.730 FDP Volatile Write Cache: Not Present 00:11:36.730 FDP Configuration: Valid 00:11:36.730 Vendor Specific Size: 0 00:11:36.730 Number of Reclaim Groups: 2 00:11:36.730 Number of Reclaim Unit Handles: 8 00:11:36.730 Max Placement Identifiers: 128 00:11:36.730 Number of Namespaces Supported: 256 00:11:36.730 Reclaim Unit Nominal Size: 6000000 bytes 00:11:36.730 Estimated Reclaim Unit Time Limit: Not Reported 00:11:36.730 RUH Desc #000: RUH Type: Initially Isolated 00:11:36.730 RUH Desc #001: RUH Type: Initially Isolated 00:11:36.730 RUH Desc #002: RUH Type: Initially Isolated 00:11:36.730 RUH Desc #003: RUH Type: Initially Isolated 00:11:36.730 RUH Desc #004: RUH Type: Initially Isolated 00:11:36.730 RUH Desc #005: RUH Type: Initially Isolated 00:11:36.730 RUH Desc #006: RUH Type: Initially Isolated 00:11:36.730 RUH Desc #007: RUH Type: Initially Isolated 00:11:36.730 00:11:36.730 FDP reclaim unit handle usage log page 00:11:36.730 ====================================== 00:11:36.730 Number of Reclaim Unit Handles: 8 00:11:36.730 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:36.730 RUH Usage Desc #001: RUH Attributes: Unused 00:11:36.730 RUH Usage Desc #002: RUH Attributes: Unused 00:11:36.730 RUH Usage Desc #003: RUH Attributes: Unused 00:11:36.730 RUH Usage Desc #004: RUH Attributes: Unused 00:11:36.730 RUH Usage Desc #005: RUH Attributes: Unused 00:11:36.730 RUH Usage Desc #006: RUH Attributes: Unused 00:11:36.730 RUH Usage Desc #007: RUH Attributes: Unused 00:11:36.730 00:11:36.730 FDP statistics log page 00:11:36.730 ======================= 00:11:36.730 Host bytes with metadata written: 425304064 00:11:36.730 Media bytes with metadata written: 425349120 00:11:36.730 Media bytes erased: 0 00:11:36.730 00:11:36.730 FDP events log page 00:11:36.730 =================== 00:11:36.730 Number of FDP events: 0 00:11:36.730 00:11:36.730 NVM Specific Namespace Data 00:11:36.730 =========================== 00:11:36.730 Logical Block Storage Tag Mask: 0 00:11:36.730 Protection Information Capabilities: 00:11:36.730 16b Guard Protection Information Storage Tag Support: No 00:11:36.730 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:36.730 Storage Tag Check Read Support: No 00:11:36.730 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:36.730 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:36.730 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:36.730 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:36.730 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:36.730 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:36.730 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:36.730 Extended LBA 
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:36.730 00:11:36.730 real 0m1.213s 00:11:36.730 user 0m0.447s 00:11:36.730 sys 0m0.548s 00:11:36.730 06:37:49 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:36.730 06:37:49 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:11:36.730 ************************************ 00:11:36.730 END TEST nvme_identify 00:11:36.730 ************************************ 00:11:36.730 06:37:49 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:11:36.730 06:37:49 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:36.730 06:37:49 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:36.730 06:37:49 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:36.730 ************************************ 00:11:36.730 START TEST nvme_perf 00:11:36.730 ************************************ 00:11:36.730 06:37:49 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:11:36.730 06:37:49 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:11:38.119 Initializing NVMe Controllers 00:11:38.119 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:38.119 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:38.119 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:38.119 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:38.119 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:38.119 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:38.119 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:38.119 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:38.119 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:38.119 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:38.119 Initialization complete. Launching workers. 
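Note on the perf invocation above: the latency tables and histograms that follow come from the logged spdk_nvme_perf command. The sketch below is a minimal standalone reproduction under the assumption that the flag semantics match spdk_nvme_perf --help for this SPDK revision; the path and all parameter values are taken verbatim from the logged command, and -N is carried over as-is rather than explained.
# -q 128   queue depth of 128 outstanding I/Os
# -w read  100% read workload
# -o 12288 I/O size of 12288 bytes (12 KiB, i.e. three 4 KiB blocks per I/O)
# -t 1     run for 1 second
# -LL      enable latency tracking; the doubled flag requests the detailed histograms printed below
# -i 0     shared memory group ID 0, letting the process coexist with other SPDK apps
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N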
00:11:38.119 ======================================================== 00:11:38.119 Latency(us) 00:11:38.119 Device Information : IOPS MiB/s Average min max 00:11:38.119 PCIE (0000:00:10.0) NSID 1 from core 0: 13355.98 156.52 9599.77 5631.81 33893.31 00:11:38.119 PCIE (0000:00:11.0) NSID 1 from core 0: 13355.98 156.52 9587.10 5708.45 32544.45 00:11:38.119 PCIE (0000:00:13.0) NSID 1 from core 0: 13355.98 156.52 9572.79 5711.12 32098.13 00:11:38.119 PCIE (0000:00:12.0) NSID 1 from core 0: 13355.98 156.52 9558.12 5733.46 30860.31 00:11:38.119 PCIE (0000:00:12.0) NSID 2 from core 0: 13355.98 156.52 9543.41 5740.19 29653.25 00:11:38.119 PCIE (0000:00:12.0) NSID 3 from core 0: 13355.98 156.52 9528.74 5675.39 28222.22 00:11:38.119 ======================================================== 00:11:38.119 Total : 80135.88 939.09 9564.99 5631.81 33893.31 00:11:38.119 00:11:38.119 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:11:38.119 ================================================================================= 00:11:38.119 1.00000% : 5948.652us 00:11:38.119 10.00000% : 6553.600us 00:11:38.119 25.00000% : 7108.135us 00:11:38.119 50.00000% : 9779.988us 00:11:38.119 75.00000% : 11090.708us 00:11:38.119 90.00000% : 12653.489us 00:11:38.119 95.00000% : 13611.323us 00:11:38.119 98.00000% : 16837.711us 00:11:38.119 99.00000% : 17946.782us 00:11:38.119 99.50000% : 26214.400us 00:11:38.119 99.90000% : 33675.422us 00:11:38.119 99.99000% : 33877.071us 00:11:38.119 99.99900% : 34078.720us 00:11:38.119 99.99990% : 34078.720us 00:11:38.119 99.99999% : 34078.720us 00:11:38.119 00:11:38.119 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:11:38.119 ================================================================================= 00:11:38.119 1.00000% : 5973.858us 00:11:38.119 10.00000% : 6553.600us 00:11:38.119 25.00000% : 7057.723us 00:11:38.119 50.00000% : 9779.988us 00:11:38.119 75.00000% : 11141.120us 00:11:38.119 90.00000% : 12653.489us 00:11:38.119 95.00000% : 13409.674us 00:11:38.119 98.00000% : 16837.711us 00:11:38.119 99.00000% : 17946.782us 00:11:38.119 99.50000% : 26012.751us 00:11:38.119 99.90000% : 32263.877us 00:11:38.119 99.99000% : 32667.175us 00:11:38.119 99.99900% : 32667.175us 00:11:38.119 99.99990% : 32667.175us 00:11:38.119 99.99999% : 32667.175us 00:11:38.119 00:11:38.119 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:11:38.119 ================================================================================= 00:11:38.119 1.00000% : 6024.271us 00:11:38.119 10.00000% : 6553.600us 00:11:38.119 25.00000% : 7057.723us 00:11:38.119 50.00000% : 9779.988us 00:11:38.119 75.00000% : 11090.708us 00:11:38.119 90.00000% : 12603.077us 00:11:38.119 95.00000% : 13510.498us 00:11:38.119 98.00000% : 16131.938us 00:11:38.119 99.00000% : 18249.255us 00:11:38.119 99.50000% : 24500.382us 00:11:38.119 99.90000% : 31860.578us 00:11:38.119 99.99000% : 32263.877us 00:11:38.119 99.99900% : 32263.877us 00:11:38.119 99.99990% : 32263.877us 00:11:38.119 99.99999% : 32263.877us 00:11:38.119 00:11:38.119 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:11:38.119 ================================================================================= 00:11:38.119 1.00000% : 6024.271us 00:11:38.119 10.00000% : 6553.600us 00:11:38.119 25.00000% : 7057.723us 00:11:38.119 50.00000% : 9729.575us 00:11:38.119 75.00000% : 11040.295us 00:11:38.119 90.00000% : 12603.077us 00:11:38.119 95.00000% : 13611.323us 00:11:38.119 98.00000% : 16837.711us 00:11:38.119 
99.00000% : 17845.957us 00:11:38.119 99.50000% : 22988.012us 00:11:38.119 99.90000% : 30650.683us 00:11:38.119 99.99000% : 30852.332us 00:11:38.119 99.99900% : 31053.982us 00:11:38.119 99.99990% : 31053.982us 00:11:38.119 99.99999% : 31053.982us 00:11:38.119 00:11:38.119 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:11:38.119 ================================================================================= 00:11:38.119 1.00000% : 6049.477us 00:11:38.119 10.00000% : 6604.012us 00:11:38.119 25.00000% : 7057.723us 00:11:38.119 50.00000% : 9729.575us 00:11:38.119 75.00000% : 11090.708us 00:11:38.119 90.00000% : 12552.665us 00:11:38.119 95.00000% : 13812.972us 00:11:38.119 98.00000% : 16636.062us 00:11:38.119 99.00000% : 17845.957us 00:11:38.119 99.50000% : 21475.643us 00:11:38.119 99.90000% : 29440.788us 00:11:38.119 99.99000% : 29642.437us 00:11:38.119 99.99900% : 29844.086us 00:11:38.119 99.99990% : 29844.086us 00:11:38.119 99.99999% : 29844.086us 00:11:38.119 00:11:38.119 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:11:38.119 ================================================================================= 00:11:38.119 1.00000% : 6049.477us 00:11:38.119 10.00000% : 6604.012us 00:11:38.119 25.00000% : 7057.723us 00:11:38.119 50.00000% : 9729.575us 00:11:38.119 75.00000% : 11090.708us 00:11:38.119 90.00000% : 12502.252us 00:11:38.119 95.00000% : 13712.148us 00:11:38.119 98.00000% : 16837.711us 00:11:38.119 99.00000% : 17946.782us 00:11:38.119 99.50000% : 19963.274us 00:11:38.119 99.90000% : 28029.243us 00:11:38.119 99.99000% : 28230.892us 00:11:38.119 99.99900% : 28230.892us 00:11:38.119 99.99990% : 28230.892us 00:11:38.119 99.99999% : 28230.892us 00:11:38.119 00:11:38.119 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:11:38.119 ============================================================================== 00:11:38.119 Range in us Cumulative IO count 00:11:38.119 5620.972 - 5646.178: 0.0150% ( 2) 00:11:38.119 5646.178 - 5671.385: 0.0374% ( 3) 00:11:38.119 5671.385 - 5696.591: 0.0748% ( 5) 00:11:38.119 5696.591 - 5721.797: 0.1047% ( 4) 00:11:38.119 5721.797 - 5747.003: 0.1719% ( 9) 00:11:38.119 5747.003 - 5772.209: 0.2318% ( 8) 00:11:38.119 5772.209 - 5797.415: 0.3140% ( 11) 00:11:38.119 5797.415 - 5822.622: 0.4187% ( 14) 00:11:38.119 5822.622 - 5847.828: 0.5233% ( 14) 00:11:38.119 5847.828 - 5873.034: 0.6205% ( 13) 00:11:38.119 5873.034 - 5898.240: 0.7999% ( 24) 00:11:38.119 5898.240 - 5923.446: 0.9420% ( 19) 00:11:38.119 5923.446 - 5948.652: 1.0915% ( 20) 00:11:38.119 5948.652 - 5973.858: 1.1962% ( 14) 00:11:38.119 5973.858 - 5999.065: 1.3532% ( 21) 00:11:38.119 5999.065 - 6024.271: 1.5550% ( 27) 00:11:38.119 6024.271 - 6049.477: 1.7494% ( 26) 00:11:38.119 6049.477 - 6074.683: 2.0335% ( 38) 00:11:38.119 6074.683 - 6099.889: 2.2802% ( 33) 00:11:38.119 6099.889 - 6125.095: 2.5643% ( 38) 00:11:38.119 6125.095 - 6150.302: 2.9306% ( 49) 00:11:38.119 6150.302 - 6175.508: 3.3343% ( 54) 00:11:38.119 6175.508 - 6200.714: 3.6109% ( 37) 00:11:38.119 6200.714 - 6225.920: 3.9997% ( 52) 00:11:38.119 6225.920 - 6251.126: 4.3810% ( 51) 00:11:38.119 6251.126 - 6276.332: 4.8221% ( 59) 00:11:38.119 6276.332 - 6301.538: 5.1884% ( 49) 00:11:38.119 6301.538 - 6326.745: 5.7416% ( 74) 00:11:38.119 6326.745 - 6351.951: 6.1752% ( 58) 00:11:38.119 6351.951 - 6377.157: 6.7210% ( 73) 00:11:38.119 6377.157 - 6402.363: 7.2219% ( 67) 00:11:38.119 6402.363 - 6427.569: 7.7751% ( 74) 00:11:38.119 6427.569 - 6452.775: 8.2760% ( 67) 00:11:38.119 6452.775 - 6503.188: 
9.4273% ( 154) 00:11:38.119 6503.188 - 6553.600: 10.7356% ( 175) 00:11:38.119 6553.600 - 6604.012: 11.9916% ( 168) 00:11:38.119 6604.012 - 6654.425: 13.4644% ( 197) 00:11:38.119 6654.425 - 6704.837: 14.8251% ( 182) 00:11:38.119 6704.837 - 6755.249: 16.1633% ( 179) 00:11:38.119 6755.249 - 6805.662: 17.5763% ( 189) 00:11:38.119 6805.662 - 6856.074: 19.0341% ( 195) 00:11:38.119 6856.074 - 6906.486: 20.4471% ( 189) 00:11:38.119 6906.486 - 6956.898: 21.8825% ( 192) 00:11:38.119 6956.898 - 7007.311: 23.3627% ( 198) 00:11:38.119 7007.311 - 7057.723: 24.7010% ( 179) 00:11:38.119 7057.723 - 7108.135: 26.0467% ( 180) 00:11:38.119 7108.135 - 7158.548: 27.3325% ( 172) 00:11:38.119 7158.548 - 7208.960: 28.4839% ( 154) 00:11:38.119 7208.960 - 7259.372: 29.6576% ( 157) 00:11:38.119 7259.372 - 7309.785: 30.6370% ( 131) 00:11:38.119 7309.785 - 7360.197: 31.5565% ( 123) 00:11:38.119 7360.197 - 7410.609: 32.4237% ( 116) 00:11:38.119 7410.609 - 7461.022: 33.2835% ( 115) 00:11:38.119 7461.022 - 7511.434: 34.0535% ( 103) 00:11:38.120 7511.434 - 7561.846: 34.6965% ( 86) 00:11:38.120 7561.846 - 7612.258: 35.4217% ( 97) 00:11:38.120 7612.258 - 7662.671: 35.9749% ( 74) 00:11:38.120 7662.671 - 7713.083: 36.5356% ( 75) 00:11:38.120 7713.083 - 7763.495: 36.9617% ( 57) 00:11:38.120 7763.495 - 7813.908: 37.4252% ( 62) 00:11:38.120 7813.908 - 7864.320: 37.8140% ( 52) 00:11:38.120 7864.320 - 7914.732: 38.2551% ( 59) 00:11:38.120 7914.732 - 7965.145: 38.5915% ( 45) 00:11:38.120 7965.145 - 8015.557: 39.0176% ( 57) 00:11:38.120 8015.557 - 8065.969: 39.3541% ( 45) 00:11:38.120 8065.969 - 8116.382: 39.6830% ( 44) 00:11:38.120 8116.382 - 8166.794: 39.9522% ( 36) 00:11:38.120 8166.794 - 8217.206: 40.2661% ( 42) 00:11:38.120 8217.206 - 8267.618: 40.5801% ( 42) 00:11:38.120 8267.618 - 8318.031: 40.8941% ( 42) 00:11:38.120 8318.031 - 8368.443: 41.1408% ( 33) 00:11:38.120 8368.443 - 8418.855: 41.4623% ( 43) 00:11:38.120 8418.855 - 8469.268: 41.6717% ( 28) 00:11:38.120 8469.268 - 8519.680: 41.9258% ( 34) 00:11:38.120 8519.680 - 8570.092: 42.1651% ( 32) 00:11:38.120 8570.092 - 8620.505: 42.4492% ( 38) 00:11:38.120 8620.505 - 8670.917: 42.7333% ( 38) 00:11:38.120 8670.917 - 8721.329: 43.0547% ( 43) 00:11:38.120 8721.329 - 8771.742: 43.3538% ( 40) 00:11:38.120 8771.742 - 8822.154: 43.6154% ( 35) 00:11:38.120 8822.154 - 8872.566: 43.8248% ( 28) 00:11:38.120 8872.566 - 8922.978: 44.1537% ( 44) 00:11:38.120 8922.978 - 8973.391: 44.4079% ( 34) 00:11:38.120 8973.391 - 9023.803: 44.6995% ( 39) 00:11:38.120 9023.803 - 9074.215: 44.9088% ( 28) 00:11:38.120 9074.215 - 9124.628: 45.2004% ( 39) 00:11:38.120 9124.628 - 9175.040: 45.4919% ( 39) 00:11:38.120 9175.040 - 9225.452: 45.7835% ( 39) 00:11:38.120 9225.452 - 9275.865: 46.0676% ( 38) 00:11:38.120 9275.865 - 9326.277: 46.3068% ( 32) 00:11:38.120 9326.277 - 9376.689: 46.6208% ( 42) 00:11:38.120 9376.689 - 9427.102: 46.9797% ( 48) 00:11:38.120 9427.102 - 9477.514: 47.3535% ( 50) 00:11:38.120 9477.514 - 9527.926: 47.7123% ( 48) 00:11:38.120 9527.926 - 9578.338: 48.2880% ( 77) 00:11:38.120 9578.338 - 9628.751: 48.7066% ( 56) 00:11:38.120 9628.751 - 9679.163: 49.3122% ( 81) 00:11:38.120 9679.163 - 9729.575: 49.8654% ( 74) 00:11:38.120 9729.575 - 9779.988: 50.4710% ( 81) 00:11:38.120 9779.988 - 9830.400: 51.0915% ( 83) 00:11:38.120 9830.400 - 9880.812: 51.7270% ( 85) 00:11:38.120 9880.812 - 9931.225: 52.4671% ( 99) 00:11:38.120 9931.225 - 9981.637: 53.1998% ( 98) 00:11:38.120 9981.637 - 10032.049: 54.2165% ( 136) 00:11:38.120 10032.049 - 10082.462: 55.2033% ( 132) 00:11:38.120 10082.462 - 10132.874: 
56.0706% ( 116) 00:11:38.120 10132.874 - 10183.286: 57.1023% ( 138) 00:11:38.120 10183.286 - 10233.698: 58.0816% ( 131) 00:11:38.120 10233.698 - 10284.111: 59.0535% ( 130) 00:11:38.120 10284.111 - 10334.523: 60.2796% ( 164) 00:11:38.120 10334.523 - 10384.935: 61.3636% ( 145) 00:11:38.120 10384.935 - 10435.348: 62.4925% ( 151) 00:11:38.120 10435.348 - 10485.760: 63.6065% ( 149) 00:11:38.120 10485.760 - 10536.172: 64.7952% ( 159) 00:11:38.120 10536.172 - 10586.585: 65.8343% ( 139) 00:11:38.120 10586.585 - 10636.997: 66.9258% ( 146) 00:11:38.120 10636.997 - 10687.409: 68.0248% ( 147) 00:11:38.120 10687.409 - 10737.822: 68.9294% ( 121) 00:11:38.120 10737.822 - 10788.234: 69.8789% ( 127) 00:11:38.120 10788.234 - 10838.646: 70.8433% ( 129) 00:11:38.120 10838.646 - 10889.058: 71.7479% ( 121) 00:11:38.120 10889.058 - 10939.471: 72.5703% ( 110) 00:11:38.120 10939.471 - 10989.883: 73.5646% ( 133) 00:11:38.120 10989.883 - 11040.295: 74.3870% ( 110) 00:11:38.120 11040.295 - 11090.708: 75.1719% ( 105) 00:11:38.120 11090.708 - 11141.120: 75.9121% ( 99) 00:11:38.120 11141.120 - 11191.532: 76.7195% ( 108) 00:11:38.120 11191.532 - 11241.945: 77.4073% ( 92) 00:11:38.120 11241.945 - 11292.357: 77.9755% ( 76) 00:11:38.120 11292.357 - 11342.769: 78.5138% ( 72) 00:11:38.120 11342.769 - 11393.182: 79.1791% ( 89) 00:11:38.120 11393.182 - 11443.594: 79.6800% ( 67) 00:11:38.120 11443.594 - 11494.006: 80.2407% ( 75) 00:11:38.120 11494.006 - 11544.418: 80.7491% ( 68) 00:11:38.120 11544.418 - 11594.831: 81.3920% ( 86) 00:11:38.120 11594.831 - 11645.243: 81.8331% ( 59) 00:11:38.120 11645.243 - 11695.655: 82.4237% ( 79) 00:11:38.120 11695.655 - 11746.068: 82.9321% ( 68) 00:11:38.120 11746.068 - 11796.480: 83.4255% ( 66) 00:11:38.120 11796.480 - 11846.892: 83.9339% ( 68) 00:11:38.120 11846.892 - 11897.305: 84.5395% ( 81) 00:11:38.120 11897.305 - 11947.717: 85.0852% ( 73) 00:11:38.120 11947.717 - 11998.129: 85.5114% ( 57) 00:11:38.120 11998.129 - 12048.542: 85.9450% ( 58) 00:11:38.120 12048.542 - 12098.954: 86.3711% ( 57) 00:11:38.120 12098.954 - 12149.366: 86.7150% ( 46) 00:11:38.120 12149.366 - 12199.778: 87.0664% ( 47) 00:11:38.120 12199.778 - 12250.191: 87.5299% ( 62) 00:11:38.120 12250.191 - 12300.603: 87.8439% ( 42) 00:11:38.120 12300.603 - 12351.015: 88.1654% ( 43) 00:11:38.120 12351.015 - 12401.428: 88.4569% ( 39) 00:11:38.120 12401.428 - 12451.840: 88.7709% ( 42) 00:11:38.120 12451.840 - 12502.252: 89.1373% ( 49) 00:11:38.120 12502.252 - 12552.665: 89.4812% ( 46) 00:11:38.120 12552.665 - 12603.077: 89.7727% ( 39) 00:11:38.120 12603.077 - 12653.489: 90.1465% ( 50) 00:11:38.120 12653.489 - 12703.902: 90.4157% ( 36) 00:11:38.120 12703.902 - 12754.314: 90.7147% ( 40) 00:11:38.120 12754.314 - 12804.726: 90.9764% ( 35) 00:11:38.120 12804.726 - 12855.138: 91.3053% ( 44) 00:11:38.120 12855.138 - 12905.551: 91.5595% ( 34) 00:11:38.120 12905.551 - 13006.375: 92.1426% ( 78) 00:11:38.120 13006.375 - 13107.200: 92.7632% ( 83) 00:11:38.120 13107.200 - 13208.025: 93.3388% ( 77) 00:11:38.120 13208.025 - 13308.849: 93.8547% ( 69) 00:11:38.120 13308.849 - 13409.674: 94.3107% ( 61) 00:11:38.120 13409.674 - 13510.498: 94.7368% ( 57) 00:11:38.120 13510.498 - 13611.323: 95.0882% ( 47) 00:11:38.120 13611.323 - 13712.148: 95.3574% ( 36) 00:11:38.120 13712.148 - 13812.972: 95.6190% ( 35) 00:11:38.120 13812.972 - 13913.797: 95.7685% ( 20) 00:11:38.120 13913.797 - 14014.622: 95.8807% ( 15) 00:11:38.120 14014.622 - 14115.446: 95.9928% ( 15) 00:11:38.120 14115.446 - 14216.271: 96.0975% ( 14) 00:11:38.120 14216.271 - 14317.095: 96.1423% ( 6) 
00:11:38.120 14317.095 - 14417.920: 96.1722% ( 4) 00:11:38.120 14417.920 - 14518.745: 96.2694% ( 13) 00:11:38.120 14518.745 - 14619.569: 96.3367% ( 9) 00:11:38.120 14619.569 - 14720.394: 96.4190% ( 11) 00:11:38.120 14720.394 - 14821.218: 96.4937% ( 10) 00:11:38.120 14821.218 - 14922.043: 96.5834% ( 12) 00:11:38.120 14922.043 - 15022.868: 96.6657% ( 11) 00:11:38.120 15022.868 - 15123.692: 96.7255% ( 8) 00:11:38.120 15123.692 - 15224.517: 96.8227% ( 13) 00:11:38.120 15224.517 - 15325.342: 96.9273% ( 14) 00:11:38.120 15325.342 - 15426.166: 97.0469% ( 16) 00:11:38.120 15426.166 - 15526.991: 97.1441% ( 13) 00:11:38.120 15526.991 - 15627.815: 97.2787% ( 18) 00:11:38.120 15627.815 - 15728.640: 97.3385% ( 8) 00:11:38.120 15728.640 - 15829.465: 97.3759% ( 5) 00:11:38.120 15829.465 - 15930.289: 97.4133% ( 5) 00:11:38.120 15930.289 - 16031.114: 97.4581% ( 6) 00:11:38.120 16031.114 - 16131.938: 97.4880% ( 4) 00:11:38.120 16131.938 - 16232.763: 97.5478% ( 8) 00:11:38.120 16232.763 - 16333.588: 97.5778% ( 4) 00:11:38.120 16333.588 - 16434.412: 97.6749% ( 13) 00:11:38.120 16434.412 - 16535.237: 97.7572% ( 11) 00:11:38.120 16535.237 - 16636.062: 97.8394% ( 11) 00:11:38.120 16636.062 - 16736.886: 97.9291% ( 12) 00:11:38.120 16736.886 - 16837.711: 98.0413% ( 15) 00:11:38.120 16837.711 - 16938.535: 98.1684% ( 17) 00:11:38.120 16938.535 - 17039.360: 98.2805% ( 15) 00:11:38.120 17039.360 - 17140.185: 98.3702% ( 12) 00:11:38.120 17140.185 - 17241.009: 98.4749% ( 14) 00:11:38.120 17241.009 - 17341.834: 98.6543% ( 24) 00:11:38.120 17341.834 - 17442.658: 98.7590% ( 14) 00:11:38.120 17442.658 - 17543.483: 98.8562% ( 13) 00:11:38.120 17543.483 - 17644.308: 98.9234% ( 9) 00:11:38.120 17644.308 - 17745.132: 98.9459% ( 3) 00:11:38.120 17745.132 - 17845.957: 98.9758% ( 4) 00:11:38.120 17845.957 - 17946.782: 99.0057% ( 4) 00:11:38.120 17946.782 - 18047.606: 99.0431% ( 5) 00:11:38.120 24399.557 - 24500.382: 99.0580% ( 2) 00:11:38.120 24500.382 - 24601.206: 99.0879% ( 4) 00:11:38.120 24601.206 - 24702.031: 99.1103% ( 3) 00:11:38.120 24702.031 - 24802.855: 99.1328% ( 3) 00:11:38.120 24802.855 - 24903.680: 99.1627% ( 4) 00:11:38.120 24903.680 - 25004.505: 99.1926% ( 4) 00:11:38.120 25004.505 - 25105.329: 99.2150% ( 3) 00:11:38.120 25105.329 - 25206.154: 99.2449% ( 4) 00:11:38.120 25206.154 - 25306.978: 99.2748% ( 4) 00:11:38.120 25306.978 - 25407.803: 99.3047% ( 4) 00:11:38.120 25407.803 - 25508.628: 99.3346% ( 4) 00:11:38.120 25508.628 - 25609.452: 99.3645% ( 4) 00:11:38.120 25609.452 - 25710.277: 99.3944% ( 4) 00:11:38.120 25710.277 - 25811.102: 99.4094% ( 2) 00:11:38.120 25811.102 - 26012.751: 99.4617% ( 7) 00:11:38.120 26012.751 - 26214.400: 99.5215% ( 8) 00:11:38.120 31860.578 - 32062.228: 99.5440% ( 3) 00:11:38.120 32062.228 - 32263.877: 99.5963% ( 7) 00:11:38.120 32263.877 - 32465.526: 99.6486% ( 7) 00:11:38.120 32465.526 - 32667.175: 99.7010% ( 7) 00:11:38.120 32667.175 - 32868.825: 99.7458% ( 6) 00:11:38.120 32868.825 - 33070.474: 99.7981% ( 7) 00:11:38.120 33070.474 - 33272.123: 99.8430% ( 6) 00:11:38.120 33272.123 - 33473.772: 99.8953% ( 7) 00:11:38.120 33473.772 - 33675.422: 99.9477% ( 7) 00:11:38.120 33675.422 - 33877.071: 99.9925% ( 6) 00:11:38.120 33877.071 - 34078.720: 100.0000% ( 1) 00:11:38.120 00:11:38.120 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:11:38.120 ============================================================================== 00:11:38.121 Range in us Cumulative IO count 00:11:38.121 5696.591 - 5721.797: 0.0150% ( 2) 00:11:38.121 5721.797 - 5747.003: 0.0224% ( 1) 00:11:38.121 
5747.003 - 5772.209: 0.0748% ( 7) 00:11:38.121 5772.209 - 5797.415: 0.1196% ( 6) 00:11:38.121 5797.415 - 5822.622: 0.1794% ( 8) 00:11:38.121 5822.622 - 5847.828: 0.2916% ( 15) 00:11:38.121 5847.828 - 5873.034: 0.4411% ( 20) 00:11:38.121 5873.034 - 5898.240: 0.5532% ( 15) 00:11:38.121 5898.240 - 5923.446: 0.7102% ( 21) 00:11:38.121 5923.446 - 5948.652: 0.8597% ( 20) 00:11:38.121 5948.652 - 5973.858: 1.0018% ( 19) 00:11:38.121 5973.858 - 5999.065: 1.1812% ( 24) 00:11:38.121 5999.065 - 6024.271: 1.3457% ( 22) 00:11:38.121 6024.271 - 6049.477: 1.5550% ( 28) 00:11:38.121 6049.477 - 6074.683: 1.7344% ( 24) 00:11:38.121 6074.683 - 6099.889: 1.9214% ( 25) 00:11:38.121 6099.889 - 6125.095: 2.1232% ( 27) 00:11:38.121 6125.095 - 6150.302: 2.3624% ( 32) 00:11:38.121 6150.302 - 6175.508: 2.6391% ( 37) 00:11:38.121 6175.508 - 6200.714: 2.9830% ( 46) 00:11:38.121 6200.714 - 6225.920: 3.3717% ( 52) 00:11:38.121 6225.920 - 6251.126: 3.7231% ( 47) 00:11:38.121 6251.126 - 6276.332: 4.0819% ( 48) 00:11:38.121 6276.332 - 6301.538: 4.5604% ( 64) 00:11:38.121 6301.538 - 6326.745: 5.1211% ( 75) 00:11:38.121 6326.745 - 6351.951: 5.6071% ( 65) 00:11:38.121 6351.951 - 6377.157: 6.2051% ( 80) 00:11:38.121 6377.157 - 6402.363: 6.7733% ( 76) 00:11:38.121 6402.363 - 6427.569: 7.3340% ( 75) 00:11:38.121 6427.569 - 6452.775: 7.8947% ( 75) 00:11:38.121 6452.775 - 6503.188: 9.1731% ( 171) 00:11:38.121 6503.188 - 6553.600: 10.4366% ( 169) 00:11:38.121 6553.600 - 6604.012: 11.8795% ( 193) 00:11:38.121 6604.012 - 6654.425: 13.2551% ( 184) 00:11:38.121 6654.425 - 6704.837: 14.6382% ( 185) 00:11:38.121 6704.837 - 6755.249: 16.0661% ( 191) 00:11:38.121 6755.249 - 6805.662: 17.5613% ( 200) 00:11:38.121 6805.662 - 6856.074: 19.0191% ( 195) 00:11:38.121 6856.074 - 6906.486: 20.7237% ( 228) 00:11:38.121 6906.486 - 6956.898: 22.2638% ( 206) 00:11:38.121 6956.898 - 7007.311: 23.8188% ( 208) 00:11:38.121 7007.311 - 7057.723: 25.3140% ( 200) 00:11:38.121 7057.723 - 7108.135: 26.6672% ( 181) 00:11:38.121 7108.135 - 7158.548: 27.9531% ( 172) 00:11:38.121 7158.548 - 7208.960: 29.1193% ( 156) 00:11:38.121 7208.960 - 7259.372: 30.1286% ( 135) 00:11:38.121 7259.372 - 7309.785: 31.0855% ( 128) 00:11:38.121 7309.785 - 7360.197: 31.9453% ( 115) 00:11:38.121 7360.197 - 7410.609: 32.8275% ( 118) 00:11:38.121 7410.609 - 7461.022: 33.7545% ( 124) 00:11:38.121 7461.022 - 7511.434: 34.5096% ( 101) 00:11:38.121 7511.434 - 7561.846: 35.1899% ( 91) 00:11:38.121 7561.846 - 7612.258: 35.8328% ( 86) 00:11:38.121 7612.258 - 7662.671: 36.4459% ( 82) 00:11:38.121 7662.671 - 7713.083: 36.9542% ( 68) 00:11:38.121 7713.083 - 7763.495: 37.3355% ( 51) 00:11:38.121 7763.495 - 7813.908: 37.6346% ( 40) 00:11:38.121 7813.908 - 7864.320: 37.9261% ( 39) 00:11:38.121 7864.320 - 7914.732: 38.2252% ( 40) 00:11:38.121 7914.732 - 7965.145: 38.4644% ( 32) 00:11:38.121 7965.145 - 8015.557: 38.7111% ( 33) 00:11:38.121 8015.557 - 8065.969: 38.9803% ( 36) 00:11:38.121 8065.969 - 8116.382: 39.2344% ( 34) 00:11:38.121 8116.382 - 8166.794: 39.5260% ( 39) 00:11:38.121 8166.794 - 8217.206: 39.8400% ( 42) 00:11:38.121 8217.206 - 8267.618: 40.2362% ( 53) 00:11:38.121 8267.618 - 8318.031: 40.5876% ( 47) 00:11:38.121 8318.031 - 8368.443: 40.9764% ( 52) 00:11:38.121 8368.443 - 8418.855: 41.2679% ( 39) 00:11:38.121 8418.855 - 8469.268: 41.5595% ( 39) 00:11:38.121 8469.268 - 8519.680: 41.8436% ( 38) 00:11:38.121 8519.680 - 8570.092: 42.0978% ( 34) 00:11:38.121 8570.092 - 8620.505: 42.3594% ( 35) 00:11:38.121 8620.505 - 8670.917: 42.6211% ( 35) 00:11:38.121 8670.917 - 8721.329: 42.8977% ( 37) 
00:11:38.121 8721.329 - 8771.742: 43.2042% ( 41) 00:11:38.121 8771.742 - 8822.154: 43.5407% ( 45) 00:11:38.121 8822.154 - 8872.566: 43.8322% ( 39) 00:11:38.121 8872.566 - 8922.978: 44.1462% ( 42) 00:11:38.121 8922.978 - 8973.391: 44.4378% ( 39) 00:11:38.121 8973.391 - 9023.803: 44.7368% ( 40) 00:11:38.121 9023.803 - 9074.215: 44.9836% ( 33) 00:11:38.121 9074.215 - 9124.628: 45.2527% ( 36) 00:11:38.121 9124.628 - 9175.040: 45.4844% ( 31) 00:11:38.121 9175.040 - 9225.452: 45.8059% ( 43) 00:11:38.121 9225.452 - 9275.865: 46.0302% ( 30) 00:11:38.121 9275.865 - 9326.277: 46.2620% ( 31) 00:11:38.121 9326.277 - 9376.689: 46.5685% ( 41) 00:11:38.121 9376.689 - 9427.102: 46.9124% ( 46) 00:11:38.121 9427.102 - 9477.514: 47.2413% ( 44) 00:11:38.121 9477.514 - 9527.926: 47.5927% ( 47) 00:11:38.121 9527.926 - 9578.338: 47.9366% ( 46) 00:11:38.121 9578.338 - 9628.751: 48.3403% ( 54) 00:11:38.121 9628.751 - 9679.163: 48.7889% ( 60) 00:11:38.121 9679.163 - 9729.575: 49.3496% ( 75) 00:11:38.121 9729.575 - 9779.988: 50.0897% ( 99) 00:11:38.121 9779.988 - 9830.400: 50.7252% ( 85) 00:11:38.121 9830.400 - 9880.812: 51.3083% ( 78) 00:11:38.121 9880.812 - 9931.225: 51.9961% ( 92) 00:11:38.121 9931.225 - 9981.637: 52.7661% ( 103) 00:11:38.121 9981.637 - 10032.049: 53.5810% ( 109) 00:11:38.121 10032.049 - 10082.462: 54.5455% ( 129) 00:11:38.121 10082.462 - 10132.874: 55.5846% ( 139) 00:11:38.121 10132.874 - 10183.286: 56.7060% ( 150) 00:11:38.121 10183.286 - 10233.698: 57.9396% ( 165) 00:11:38.121 10233.698 - 10284.111: 59.1133% ( 157) 00:11:38.121 10284.111 - 10334.523: 60.2946% ( 158) 00:11:38.121 10334.523 - 10384.935: 61.5057% ( 162) 00:11:38.121 10384.935 - 10435.348: 62.7467% ( 166) 00:11:38.121 10435.348 - 10485.760: 64.0251% ( 171) 00:11:38.121 10485.760 - 10536.172: 65.1764% ( 154) 00:11:38.121 10536.172 - 10586.585: 66.3053% ( 151) 00:11:38.121 10586.585 - 10636.997: 67.2847% ( 131) 00:11:38.121 10636.997 - 10687.409: 68.2491% ( 129) 00:11:38.121 10687.409 - 10737.822: 69.1612% ( 122) 00:11:38.121 10737.822 - 10788.234: 70.1106% ( 127) 00:11:38.121 10788.234 - 10838.646: 71.0078% ( 120) 00:11:38.121 10838.646 - 10889.058: 71.9124% ( 121) 00:11:38.121 10889.058 - 10939.471: 72.7123% ( 107) 00:11:38.121 10939.471 - 10989.883: 73.4300% ( 96) 00:11:38.121 10989.883 - 11040.295: 74.1477% ( 96) 00:11:38.121 11040.295 - 11090.708: 74.8879% ( 99) 00:11:38.121 11090.708 - 11141.120: 75.6130% ( 97) 00:11:38.121 11141.120 - 11191.532: 76.3083% ( 93) 00:11:38.121 11191.532 - 11241.945: 76.9662% ( 88) 00:11:38.121 11241.945 - 11292.357: 77.6241% ( 88) 00:11:38.121 11292.357 - 11342.769: 78.3044% ( 91) 00:11:38.121 11342.769 - 11393.182: 78.9175% ( 82) 00:11:38.121 11393.182 - 11443.594: 79.5529% ( 85) 00:11:38.121 11443.594 - 11494.006: 80.1959% ( 86) 00:11:38.121 11494.006 - 11544.418: 80.8164% ( 83) 00:11:38.121 11544.418 - 11594.831: 81.3771% ( 75) 00:11:38.121 11594.831 - 11645.243: 81.9228% ( 73) 00:11:38.121 11645.243 - 11695.655: 82.5359% ( 82) 00:11:38.121 11695.655 - 11746.068: 83.1115% ( 77) 00:11:38.121 11746.068 - 11796.480: 83.6648% ( 74) 00:11:38.121 11796.480 - 11846.892: 84.1657% ( 67) 00:11:38.121 11846.892 - 11897.305: 84.6217% ( 61) 00:11:38.121 11897.305 - 11947.717: 85.0628% ( 59) 00:11:38.121 11947.717 - 11998.129: 85.4516% ( 52) 00:11:38.121 11998.129 - 12048.542: 85.7805% ( 44) 00:11:38.121 12048.542 - 12098.954: 86.0870% ( 41) 00:11:38.121 12098.954 - 12149.366: 86.4234% ( 45) 00:11:38.121 12149.366 - 12199.778: 86.8122% ( 52) 00:11:38.121 12199.778 - 12250.191: 87.2084% ( 53) 00:11:38.121 
12250.191 - 12300.603: 87.6271% ( 56) 00:11:38.121 12300.603 - 12351.015: 88.0084% ( 51) 00:11:38.121 12351.015 - 12401.428: 88.3822% ( 50) 00:11:38.121 12401.428 - 12451.840: 88.7410% ( 48) 00:11:38.121 12451.840 - 12502.252: 89.1298% ( 52) 00:11:38.121 12502.252 - 12552.665: 89.5111% ( 51) 00:11:38.121 12552.665 - 12603.077: 89.8998% ( 52) 00:11:38.121 12603.077 - 12653.489: 90.2811% ( 51) 00:11:38.121 12653.489 - 12703.902: 90.6923% ( 55) 00:11:38.121 12703.902 - 12754.314: 91.0511% ( 48) 00:11:38.121 12754.314 - 12804.726: 91.4773% ( 57) 00:11:38.121 12804.726 - 12855.138: 91.8660% ( 52) 00:11:38.121 12855.138 - 12905.551: 92.2548% ( 52) 00:11:38.121 12905.551 - 13006.375: 92.9874% ( 98) 00:11:38.121 13006.375 - 13107.200: 93.6005% ( 82) 00:11:38.121 13107.200 - 13208.025: 94.1687% ( 76) 00:11:38.121 13208.025 - 13308.849: 94.6770% ( 68) 00:11:38.121 13308.849 - 13409.674: 95.1181% ( 59) 00:11:38.121 13409.674 - 13510.498: 95.4770% ( 48) 00:11:38.121 13510.498 - 13611.323: 95.7386% ( 35) 00:11:38.121 13611.323 - 13712.148: 95.8807% ( 19) 00:11:38.121 13712.148 - 13812.972: 95.9704% ( 12) 00:11:38.121 13812.972 - 13913.797: 96.0452% ( 10) 00:11:38.121 13913.797 - 14014.622: 96.2022% ( 21) 00:11:38.121 14014.622 - 14115.446: 96.2769% ( 10) 00:11:38.121 14115.446 - 14216.271: 96.3218% ( 6) 00:11:38.121 14216.271 - 14317.095: 96.3592% ( 5) 00:11:38.121 14317.095 - 14417.920: 96.4040% ( 6) 00:11:38.121 14417.920 - 14518.745: 96.4489% ( 6) 00:11:38.121 14518.745 - 14619.569: 96.4937% ( 6) 00:11:38.121 14619.569 - 14720.394: 96.5984% ( 14) 00:11:38.121 14720.394 - 14821.218: 96.6881% ( 12) 00:11:38.121 14821.218 - 14922.043: 96.7853% ( 13) 00:11:38.121 14922.043 - 15022.868: 96.8675% ( 11) 00:11:38.121 15022.868 - 15123.692: 96.9124% ( 6) 00:11:38.121 15123.692 - 15224.517: 96.9572% ( 6) 00:11:38.121 15224.517 - 15325.342: 97.0096% ( 7) 00:11:38.121 15325.342 - 15426.166: 97.0619% ( 7) 00:11:38.121 15426.166 - 15526.991: 97.1142% ( 7) 00:11:38.121 15526.991 - 15627.815: 97.1292% ( 2) 00:11:38.121 15829.465 - 15930.289: 97.1516% ( 3) 00:11:38.121 15930.289 - 16031.114: 97.2039% ( 7) 00:11:38.121 16031.114 - 16131.938: 97.2563% ( 7) 00:11:38.122 16131.938 - 16232.763: 97.3161% ( 8) 00:11:38.122 16232.763 - 16333.588: 97.3983% ( 11) 00:11:38.122 16333.588 - 16434.412: 97.5254% ( 17) 00:11:38.122 16434.412 - 16535.237: 97.6675% ( 19) 00:11:38.122 16535.237 - 16636.062: 97.7946% ( 17) 00:11:38.122 16636.062 - 16736.886: 97.9366% ( 19) 00:11:38.122 16736.886 - 16837.711: 98.0487% ( 15) 00:11:38.122 16837.711 - 16938.535: 98.1385% ( 12) 00:11:38.122 16938.535 - 17039.360: 98.2805% ( 19) 00:11:38.122 17039.360 - 17140.185: 98.4225% ( 19) 00:11:38.122 17140.185 - 17241.009: 98.5646% ( 19) 00:11:38.122 17241.009 - 17341.834: 98.6767% ( 15) 00:11:38.122 17341.834 - 17442.658: 98.7739% ( 13) 00:11:38.122 17442.658 - 17543.483: 98.8412% ( 9) 00:11:38.122 17543.483 - 17644.308: 98.8786% ( 5) 00:11:38.122 17644.308 - 17745.132: 98.9234% ( 6) 00:11:38.122 17745.132 - 17845.957: 98.9683% ( 6) 00:11:38.122 17845.957 - 17946.782: 99.0132% ( 6) 00:11:38.122 17946.782 - 18047.606: 99.0431% ( 4) 00:11:38.122 23592.960 - 23693.785: 99.0580% ( 2) 00:11:38.122 23693.785 - 23794.609: 99.0804% ( 3) 00:11:38.122 23794.609 - 23895.434: 99.1029% ( 3) 00:11:38.122 23895.434 - 23996.258: 99.1178% ( 2) 00:11:38.122 23996.258 - 24097.083: 99.1403% ( 3) 00:11:38.122 24097.083 - 24197.908: 99.1552% ( 2) 00:11:38.122 24197.908 - 24298.732: 99.1702% ( 2) 00:11:38.122 24298.732 - 24399.557: 99.1851% ( 2) 00:11:38.122 24399.557 - 
24500.382: 99.2075% ( 3) 00:11:38.122 24500.382 - 24601.206: 99.2225% ( 2) 00:11:38.122 24601.206 - 24702.031: 99.2449% ( 3) 00:11:38.122 24702.031 - 24802.855: 99.2599% ( 2) 00:11:38.122 24802.855 - 24903.680: 99.2748% ( 2) 00:11:38.122 24903.680 - 25004.505: 99.2972% ( 3) 00:11:38.122 25004.505 - 25105.329: 99.3122% ( 2) 00:11:38.122 25105.329 - 25206.154: 99.3421% ( 4) 00:11:38.122 25206.154 - 25306.978: 99.3571% ( 2) 00:11:38.122 25306.978 - 25407.803: 99.3870% ( 4) 00:11:38.122 25407.803 - 25508.628: 99.4094% ( 3) 00:11:38.122 25508.628 - 25609.452: 99.4393% ( 4) 00:11:38.122 25609.452 - 25710.277: 99.4617% ( 3) 00:11:38.122 25710.277 - 25811.102: 99.4842% ( 3) 00:11:38.122 25811.102 - 26012.751: 99.5215% ( 5) 00:11:38.122 30650.683 - 30852.332: 99.5514% ( 4) 00:11:38.122 30852.332 - 31053.982: 99.6038% ( 7) 00:11:38.122 31053.982 - 31255.631: 99.6561% ( 7) 00:11:38.122 31255.631 - 31457.280: 99.7084% ( 7) 00:11:38.122 31457.280 - 31658.929: 99.7608% ( 7) 00:11:38.122 31658.929 - 31860.578: 99.8131% ( 7) 00:11:38.122 31860.578 - 32062.228: 99.8729% ( 8) 00:11:38.122 32062.228 - 32263.877: 99.9252% ( 7) 00:11:38.122 32263.877 - 32465.526: 99.9776% ( 7) 00:11:38.122 32465.526 - 32667.175: 100.0000% ( 3) 00:11:38.122 00:11:38.122 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:11:38.122 ============================================================================== 00:11:38.122 Range in us Cumulative IO count 00:11:38.122 5696.591 - 5721.797: 0.0150% ( 2) 00:11:38.122 5721.797 - 5747.003: 0.0299% ( 2) 00:11:38.122 5747.003 - 5772.209: 0.0748% ( 6) 00:11:38.122 5772.209 - 5797.415: 0.1346% ( 8) 00:11:38.122 5797.415 - 5822.622: 0.2093% ( 10) 00:11:38.122 5822.622 - 5847.828: 0.2990% ( 12) 00:11:38.122 5847.828 - 5873.034: 0.3738% ( 10) 00:11:38.122 5873.034 - 5898.240: 0.4486% ( 10) 00:11:38.122 5898.240 - 5923.446: 0.5682% ( 16) 00:11:38.122 5923.446 - 5948.652: 0.6504% ( 11) 00:11:38.122 5948.652 - 5973.858: 0.7850% ( 18) 00:11:38.122 5973.858 - 5999.065: 0.8822% ( 13) 00:11:38.122 5999.065 - 6024.271: 1.0093% ( 17) 00:11:38.122 6024.271 - 6049.477: 1.1737% ( 22) 00:11:38.122 6049.477 - 6074.683: 1.3083% ( 18) 00:11:38.122 6074.683 - 6099.889: 1.4578% ( 20) 00:11:38.122 6099.889 - 6125.095: 1.6447% ( 25) 00:11:38.122 6125.095 - 6150.302: 1.8989% ( 34) 00:11:38.122 6150.302 - 6175.508: 2.1755% ( 37) 00:11:38.122 6175.508 - 6200.714: 2.4671% ( 39) 00:11:38.122 6200.714 - 6225.920: 2.7886% ( 43) 00:11:38.122 6225.920 - 6251.126: 3.1549% ( 49) 00:11:38.122 6251.126 - 6276.332: 3.5810% ( 57) 00:11:38.122 6276.332 - 6301.538: 4.0819% ( 67) 00:11:38.122 6301.538 - 6326.745: 4.6277% ( 73) 00:11:38.122 6326.745 - 6351.951: 5.1959% ( 76) 00:11:38.122 6351.951 - 6377.157: 5.7416% ( 73) 00:11:38.122 6377.157 - 6402.363: 6.3173% ( 77) 00:11:38.122 6402.363 - 6427.569: 6.9079% ( 79) 00:11:38.122 6427.569 - 6452.775: 7.5508% ( 86) 00:11:38.122 6452.775 - 6503.188: 8.8292% ( 171) 00:11:38.122 6503.188 - 6553.600: 10.2198% ( 186) 00:11:38.122 6553.600 - 6604.012: 11.7001% ( 198) 00:11:38.122 6604.012 - 6654.425: 13.1654% ( 196) 00:11:38.122 6654.425 - 6704.837: 14.6456% ( 198) 00:11:38.122 6704.837 - 6755.249: 16.1932% ( 207) 00:11:38.122 6755.249 - 6805.662: 17.8005% ( 215) 00:11:38.122 6805.662 - 6856.074: 19.4079% ( 215) 00:11:38.122 6856.074 - 6906.486: 20.9928% ( 212) 00:11:38.122 6906.486 - 6956.898: 22.5628% ( 210) 00:11:38.122 6956.898 - 7007.311: 24.1253% ( 209) 00:11:38.122 7007.311 - 7057.723: 25.6280% ( 201) 00:11:38.122 7057.723 - 7108.135: 27.0260% ( 187) 00:11:38.122 
7108.135 - 7158.548: 28.1848% ( 155) 00:11:38.122 7158.548 - 7208.960: 29.2763% ( 146) 00:11:38.122 7208.960 - 7259.372: 30.2482% ( 130) 00:11:38.122 7259.372 - 7309.785: 31.2350% ( 132) 00:11:38.122 7309.785 - 7360.197: 32.0948% ( 115) 00:11:38.122 7360.197 - 7410.609: 32.9919% ( 120) 00:11:38.122 7410.609 - 7461.022: 33.7993% ( 108) 00:11:38.122 7461.022 - 7511.434: 34.5021% ( 94) 00:11:38.122 7511.434 - 7561.846: 35.1749% ( 90) 00:11:38.122 7561.846 - 7612.258: 35.7506% ( 77) 00:11:38.122 7612.258 - 7662.671: 36.3113% ( 75) 00:11:38.122 7662.671 - 7713.083: 36.8047% ( 66) 00:11:38.122 7713.083 - 7763.495: 37.2981% ( 66) 00:11:38.122 7763.495 - 7813.908: 37.7542% ( 61) 00:11:38.122 7813.908 - 7864.320: 38.1504% ( 53) 00:11:38.122 7864.320 - 7914.732: 38.5766% ( 57) 00:11:38.122 7914.732 - 7965.145: 38.9130% ( 45) 00:11:38.122 7965.145 - 8015.557: 39.2419% ( 44) 00:11:38.122 8015.557 - 8065.969: 39.4961% ( 34) 00:11:38.122 8065.969 - 8116.382: 39.7653% ( 36) 00:11:38.122 8116.382 - 8166.794: 40.0194% ( 34) 00:11:38.122 8166.794 - 8217.206: 40.2661% ( 33) 00:11:38.122 8217.206 - 8267.618: 40.4979% ( 31) 00:11:38.122 8267.618 - 8318.031: 40.7147% ( 29) 00:11:38.122 8318.031 - 8368.443: 40.9240% ( 28) 00:11:38.122 8368.443 - 8418.855: 41.1259% ( 27) 00:11:38.122 8418.855 - 8469.268: 41.3502% ( 30) 00:11:38.122 8469.268 - 8519.680: 41.5520% ( 27) 00:11:38.122 8519.680 - 8570.092: 41.7315% ( 24) 00:11:38.122 8570.092 - 8620.505: 41.9333% ( 27) 00:11:38.122 8620.505 - 8670.917: 42.1950% ( 35) 00:11:38.122 8670.917 - 8721.329: 42.4716% ( 37) 00:11:38.122 8721.329 - 8771.742: 42.7258% ( 34) 00:11:38.122 8771.742 - 8822.154: 42.9874% ( 35) 00:11:38.122 8822.154 - 8872.566: 43.2416% ( 34) 00:11:38.122 8872.566 - 8922.978: 43.5033% ( 35) 00:11:38.122 8922.978 - 8973.391: 43.7575% ( 34) 00:11:38.122 8973.391 - 9023.803: 44.0266% ( 36) 00:11:38.122 9023.803 - 9074.215: 44.3032% ( 37) 00:11:38.122 9074.215 - 9124.628: 44.6172% ( 42) 00:11:38.122 9124.628 - 9175.040: 44.9387% ( 43) 00:11:38.122 9175.040 - 9225.452: 45.3275% ( 52) 00:11:38.122 9225.452 - 9275.865: 45.7611% ( 58) 00:11:38.122 9275.865 - 9326.277: 46.1498% ( 52) 00:11:38.122 9326.277 - 9376.689: 46.5087% ( 48) 00:11:38.122 9376.689 - 9427.102: 46.9199% ( 55) 00:11:38.122 9427.102 - 9477.514: 47.3834% ( 62) 00:11:38.122 9477.514 - 9527.926: 47.7871% ( 54) 00:11:38.122 9527.926 - 9578.338: 48.2132% ( 57) 00:11:38.122 9578.338 - 9628.751: 48.6618% ( 60) 00:11:38.122 9628.751 - 9679.163: 49.1702% ( 68) 00:11:38.122 9679.163 - 9729.575: 49.7533% ( 78) 00:11:38.122 9729.575 - 9779.988: 50.3289% ( 77) 00:11:38.122 9779.988 - 9830.400: 50.9943% ( 89) 00:11:38.122 9830.400 - 9880.812: 51.7644% ( 103) 00:11:38.122 9880.812 - 9931.225: 52.5120% ( 100) 00:11:38.122 9931.225 - 9981.637: 53.2371% ( 97) 00:11:38.122 9981.637 - 10032.049: 54.0894% ( 114) 00:11:38.122 10032.049 - 10082.462: 55.0538% ( 129) 00:11:38.122 10082.462 - 10132.874: 56.2201% ( 156) 00:11:38.122 10132.874 - 10183.286: 57.3340% ( 149) 00:11:38.122 10183.286 - 10233.698: 58.4629% ( 151) 00:11:38.122 10233.698 - 10284.111: 59.5245% ( 142) 00:11:38.122 10284.111 - 10334.523: 60.5712% ( 140) 00:11:38.122 10334.523 - 10384.935: 61.7150% ( 153) 00:11:38.122 10384.935 - 10435.348: 62.9635% ( 167) 00:11:38.122 10435.348 - 10485.760: 64.1447% ( 158) 00:11:38.122 10485.760 - 10536.172: 65.2961% ( 154) 00:11:38.122 10536.172 - 10586.585: 66.3876% ( 146) 00:11:38.122 10586.585 - 10636.997: 67.4492% ( 142) 00:11:38.122 10636.997 - 10687.409: 68.3239% ( 117) 00:11:38.122 10687.409 - 10737.822: 
69.2359% ( 122) 00:11:38.122 10737.822 - 10788.234: 70.0882% ( 114) 00:11:38.122 10788.234 - 10838.646: 70.9779% ( 119) 00:11:38.122 10838.646 - 10889.058: 71.9273% ( 127) 00:11:38.122 10889.058 - 10939.471: 72.8544% ( 124) 00:11:38.122 10939.471 - 10989.883: 73.6244% ( 103) 00:11:38.122 10989.883 - 11040.295: 74.3944% ( 103) 00:11:38.122 11040.295 - 11090.708: 75.0822% ( 92) 00:11:38.122 11090.708 - 11141.120: 75.7925% ( 95) 00:11:38.122 11141.120 - 11191.532: 76.4354% ( 86) 00:11:38.122 11191.532 - 11241.945: 77.0410% ( 81) 00:11:38.122 11241.945 - 11292.357: 77.5792% ( 72) 00:11:38.122 11292.357 - 11342.769: 78.1624% ( 78) 00:11:38.122 11342.769 - 11393.182: 78.7978% ( 85) 00:11:38.122 11393.182 - 11443.594: 79.4034% ( 81) 00:11:38.122 11443.594 - 11494.006: 80.0239% ( 83) 00:11:38.122 11494.006 - 11544.418: 80.6071% ( 78) 00:11:38.122 11544.418 - 11594.831: 81.2051% ( 80) 00:11:38.123 11594.831 - 11645.243: 81.8257% ( 83) 00:11:38.123 11645.243 - 11695.655: 82.4387% ( 82) 00:11:38.123 11695.655 - 11746.068: 83.0218% ( 78) 00:11:38.123 11746.068 - 11796.480: 83.5452% ( 70) 00:11:38.123 11796.480 - 11846.892: 84.0535% ( 68) 00:11:38.123 11846.892 - 11897.305: 84.5469% ( 66) 00:11:38.123 11897.305 - 11947.717: 85.0179% ( 63) 00:11:38.123 11947.717 - 11998.129: 85.4067% ( 52) 00:11:38.123 11998.129 - 12048.542: 85.8029% ( 53) 00:11:38.123 12048.542 - 12098.954: 86.2889% ( 65) 00:11:38.123 12098.954 - 12149.366: 86.7300% ( 59) 00:11:38.123 12149.366 - 12199.778: 87.1636% ( 58) 00:11:38.123 12199.778 - 12250.191: 87.5673% ( 54) 00:11:38.123 12250.191 - 12300.603: 87.9411% ( 50) 00:11:38.123 12300.603 - 12351.015: 88.2999% ( 48) 00:11:38.123 12351.015 - 12401.428: 88.6812% ( 51) 00:11:38.123 12401.428 - 12451.840: 89.0401% ( 48) 00:11:38.123 12451.840 - 12502.252: 89.3615% ( 43) 00:11:38.123 12502.252 - 12552.665: 89.6830% ( 43) 00:11:38.123 12552.665 - 12603.077: 90.0867% ( 54) 00:11:38.123 12603.077 - 12653.489: 90.4904% ( 54) 00:11:38.123 12653.489 - 12703.902: 90.8717% ( 51) 00:11:38.123 12703.902 - 12754.314: 91.2605% ( 52) 00:11:38.123 12754.314 - 12804.726: 91.6492% ( 52) 00:11:38.123 12804.726 - 12855.138: 92.0604% ( 55) 00:11:38.123 12855.138 - 12905.551: 92.4043% ( 46) 00:11:38.123 12905.551 - 13006.375: 93.0248% ( 83) 00:11:38.123 13006.375 - 13107.200: 93.5182% ( 66) 00:11:38.123 13107.200 - 13208.025: 93.9294% ( 55) 00:11:38.123 13208.025 - 13308.849: 94.3705% ( 59) 00:11:38.123 13308.849 - 13409.674: 94.7368% ( 49) 00:11:38.123 13409.674 - 13510.498: 95.0135% ( 37) 00:11:38.123 13510.498 - 13611.323: 95.2602% ( 33) 00:11:38.123 13611.323 - 13712.148: 95.4471% ( 25) 00:11:38.123 13712.148 - 13812.972: 95.5742% ( 17) 00:11:38.123 13812.972 - 13913.797: 95.6938% ( 16) 00:11:38.123 13913.797 - 14014.622: 95.8283% ( 18) 00:11:38.123 14014.622 - 14115.446: 95.9554% ( 17) 00:11:38.123 14115.446 - 14216.271: 96.0377% ( 11) 00:11:38.123 14216.271 - 14317.095: 96.1274% ( 12) 00:11:38.123 14317.095 - 14417.920: 96.2022% ( 10) 00:11:38.123 14417.920 - 14518.745: 96.2769% ( 10) 00:11:38.123 14518.745 - 14619.569: 96.3592% ( 11) 00:11:38.123 14619.569 - 14720.394: 96.4937% ( 18) 00:11:38.123 14720.394 - 14821.218: 96.6208% ( 17) 00:11:38.123 14821.218 - 14922.043: 96.7255% ( 14) 00:11:38.123 14922.043 - 15022.868: 96.7928% ( 9) 00:11:38.123 15022.868 - 15123.692: 96.8825% ( 12) 00:11:38.123 15123.692 - 15224.517: 96.9871% ( 14) 00:11:38.123 15224.517 - 15325.342: 97.1516% ( 22) 00:11:38.123 15325.342 - 15426.166: 97.2712% ( 16) 00:11:38.123 15426.166 - 15526.991: 97.4208% ( 20) 00:11:38.123 
15526.991 - 15627.815: 97.5404% ( 16) 00:11:38.123 15627.815 - 15728.640: 97.6301% ( 12) 00:11:38.123 15728.640 - 15829.465: 97.7198% ( 12) 00:11:38.123 15829.465 - 15930.289: 97.8095% ( 12) 00:11:38.123 15930.289 - 16031.114: 97.9067% ( 13) 00:11:38.123 16031.114 - 16131.938: 98.0039% ( 13) 00:11:38.123 16131.938 - 16232.763: 98.0487% ( 6) 00:11:38.123 16232.763 - 16333.588: 98.1160% ( 9) 00:11:38.123 16333.588 - 16434.412: 98.1609% ( 6) 00:11:38.123 16434.412 - 16535.237: 98.2057% ( 6) 00:11:38.123 16535.237 - 16636.062: 98.2506% ( 6) 00:11:38.123 16636.062 - 16736.886: 98.2955% ( 6) 00:11:38.123 16736.886 - 16837.711: 98.3478% ( 7) 00:11:38.123 16837.711 - 16938.535: 98.4375% ( 12) 00:11:38.123 16938.535 - 17039.360: 98.5197% ( 11) 00:11:38.123 17039.360 - 17140.185: 98.6094% ( 12) 00:11:38.123 17140.185 - 17241.009: 98.6992% ( 12) 00:11:38.123 17241.009 - 17341.834: 98.7889% ( 12) 00:11:38.123 17341.834 - 17442.658: 98.8188% ( 4) 00:11:38.123 17442.658 - 17543.483: 98.8487% ( 4) 00:11:38.123 17543.483 - 17644.308: 98.8711% ( 3) 00:11:38.123 17644.308 - 17745.132: 98.8935% ( 3) 00:11:38.123 17745.132 - 17845.957: 98.9234% ( 4) 00:11:38.123 17845.957 - 17946.782: 98.9459% ( 3) 00:11:38.123 17946.782 - 18047.606: 98.9758% ( 4) 00:11:38.123 18047.606 - 18148.431: 98.9982% ( 3) 00:11:38.123 18148.431 - 18249.255: 99.0281% ( 4) 00:11:38.123 18249.255 - 18350.080: 99.0431% ( 2) 00:11:38.123 22584.714 - 22685.538: 99.0655% ( 3) 00:11:38.123 22685.538 - 22786.363: 99.1029% ( 5) 00:11:38.123 22786.363 - 22887.188: 99.1178% ( 2) 00:11:38.123 22887.188 - 22988.012: 99.1477% ( 4) 00:11:38.123 22988.012 - 23088.837: 99.1702% ( 3) 00:11:38.123 23088.837 - 23189.662: 99.2001% ( 4) 00:11:38.123 23189.662 - 23290.486: 99.2225% ( 3) 00:11:38.123 23290.486 - 23391.311: 99.2449% ( 3) 00:11:38.123 23391.311 - 23492.135: 99.2673% ( 3) 00:11:38.123 23492.135 - 23592.960: 99.2972% ( 4) 00:11:38.123 23592.960 - 23693.785: 99.3197% ( 3) 00:11:38.123 23693.785 - 23794.609: 99.3496% ( 4) 00:11:38.123 23794.609 - 23895.434: 99.3720% ( 3) 00:11:38.123 23895.434 - 23996.258: 99.3944% ( 3) 00:11:38.123 23996.258 - 24097.083: 99.4243% ( 4) 00:11:38.123 24097.083 - 24197.908: 99.4468% ( 3) 00:11:38.123 24197.908 - 24298.732: 99.4767% ( 4) 00:11:38.123 24298.732 - 24399.557: 99.4991% ( 3) 00:11:38.123 24399.557 - 24500.382: 99.5215% ( 3) 00:11:38.123 30247.385 - 30449.034: 99.5664% ( 6) 00:11:38.123 30449.034 - 30650.683: 99.6187% ( 7) 00:11:38.123 30650.683 - 30852.332: 99.6636% ( 6) 00:11:38.123 30852.332 - 31053.982: 99.7159% ( 7) 00:11:38.123 31053.982 - 31255.631: 99.7757% ( 8) 00:11:38.123 31255.631 - 31457.280: 99.8281% ( 7) 00:11:38.123 31457.280 - 31658.929: 99.8804% ( 7) 00:11:38.123 31658.929 - 31860.578: 99.9327% ( 7) 00:11:38.123 31860.578 - 32062.228: 99.9850% ( 7) 00:11:38.123 32062.228 - 32263.877: 100.0000% ( 2)
00:11:38.123
00:11:38.123 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:11:38.123 ==============================================================================
00:11:38.123 Range in us Cumulative IO count
00:11:38.123 5721.797 - 5747.003: 0.0150% ( 2) 00:11:38.123 5747.003 - 5772.209: 0.0374% ( 3) 00:11:38.123 5772.209 - 5797.415: 0.0673% ( 4) 00:11:38.123 5797.415 - 5822.622: 0.1121% ( 6) 00:11:38.123 5822.622 - 5847.828: 0.1869% ( 10) 00:11:38.123 5847.828 - 5873.034: 0.2617% ( 10) 00:11:38.123 5873.034 - 5898.240: 0.3589% ( 13) 00:11:38.123 5898.240 - 5923.446: 0.4710% ( 15) 00:11:38.123 5923.446 - 5948.652: 0.5831% ( 15) 00:11:38.123 5948.652 - 5973.858: 0.7252% ( 19)
00:11:38.123 5973.858 - 5999.065: 0.8523% ( 17) 00:11:38.123 5999.065 - 6024.271: 1.0018% ( 20) 00:11:38.123 6024.271 - 6049.477: 1.1364% ( 18) 00:11:38.123 6049.477 - 6074.683: 1.3083% ( 23) 00:11:38.123 6074.683 - 6099.889: 1.4728% ( 22) 00:11:38.123 6099.889 - 6125.095: 1.6746% ( 27) 00:11:38.123 6125.095 - 6150.302: 1.8840% ( 28) 00:11:38.123 6150.302 - 6175.508: 2.1232% ( 32) 00:11:38.123 6175.508 - 6200.714: 2.4522% ( 44) 00:11:38.123 6200.714 - 6225.920: 2.7362% ( 38) 00:11:38.123 6225.920 - 6251.126: 3.1624% ( 57) 00:11:38.123 6251.126 - 6276.332: 3.5736% ( 55) 00:11:38.123 6276.332 - 6301.538: 4.0147% ( 59) 00:11:38.123 6301.538 - 6326.745: 4.5230% ( 68) 00:11:38.123 6326.745 - 6351.951: 5.0164% ( 66) 00:11:38.123 6351.951 - 6377.157: 5.5398% ( 70) 00:11:38.123 6377.157 - 6402.363: 6.1229% ( 78) 00:11:38.123 6402.363 - 6427.569: 6.7210% ( 80) 00:11:38.123 6427.569 - 6452.775: 7.3041% ( 78) 00:11:38.123 6452.775 - 6503.188: 8.6498% ( 180) 00:11:38.123 6503.188 - 6553.600: 10.0553% ( 188) 00:11:38.123 6553.600 - 6604.012: 11.4459% ( 186) 00:11:38.123 6604.012 - 6654.425: 12.7542% ( 175) 00:11:38.123 6654.425 - 6704.837: 14.2943% ( 206) 00:11:38.123 6704.837 - 6755.249: 15.8867% ( 213) 00:11:38.123 6755.249 - 6805.662: 17.4342% ( 207) 00:11:38.123 6805.662 - 6856.074: 18.9967% ( 209) 00:11:38.123 6856.074 - 6906.486: 20.5368% ( 206) 00:11:38.123 6906.486 - 6956.898: 22.2264% ( 226) 00:11:38.123 6956.898 - 7007.311: 23.8562% ( 218) 00:11:38.123 7007.311 - 7057.723: 25.3813% ( 204) 00:11:38.123 7057.723 - 7108.135: 26.8840% ( 201) 00:11:38.123 7108.135 - 7158.548: 28.1474% ( 169) 00:11:38.123 7158.548 - 7208.960: 29.3286% ( 158) 00:11:38.123 7208.960 - 7259.372: 30.3603% ( 138) 00:11:38.123 7259.372 - 7309.785: 31.3771% ( 136) 00:11:38.123 7309.785 - 7360.197: 32.2742% ( 120) 00:11:38.123 7360.197 - 7410.609: 33.0891% ( 109) 00:11:38.123 7410.609 - 7461.022: 33.9115% ( 110) 00:11:38.123 7461.022 - 7511.434: 34.6367% ( 97) 00:11:38.123 7511.434 - 7561.846: 35.3394% ( 94) 00:11:38.123 7561.846 - 7612.258: 35.9076% ( 76) 00:11:38.124 7612.258 - 7662.671: 36.4608% ( 74) 00:11:38.124 7662.671 - 7713.083: 37.0066% ( 73) 00:11:38.124 7713.083 - 7763.495: 37.5224% ( 69) 00:11:38.124 7763.495 - 7813.908: 38.0009% ( 64) 00:11:38.124 7813.908 - 7864.320: 38.4569% ( 61) 00:11:38.124 7864.320 - 7914.732: 38.8532% ( 53) 00:11:38.124 7914.732 - 7965.145: 39.2120% ( 48) 00:11:38.124 7965.145 - 8015.557: 39.5410% ( 44) 00:11:38.124 8015.557 - 8065.969: 39.8251% ( 38) 00:11:38.124 8065.969 - 8116.382: 40.1166% ( 39) 00:11:38.124 8116.382 - 8166.794: 40.3932% ( 37) 00:11:38.124 8166.794 - 8217.206: 40.6624% ( 36) 00:11:38.124 8217.206 - 8267.618: 40.9166% ( 34) 00:11:38.124 8267.618 - 8318.031: 41.1633% ( 33) 00:11:38.124 8318.031 - 8368.443: 41.4474% ( 38) 00:11:38.124 8368.443 - 8418.855: 41.6642% ( 29) 00:11:38.124 8418.855 - 8469.268: 41.8735% ( 28) 00:11:38.124 8469.268 - 8519.680: 42.1202% ( 33) 00:11:38.124 8519.680 - 8570.092: 42.3744% ( 34) 00:11:38.124 8570.092 - 8620.505: 42.5987% ( 30) 00:11:38.124 8620.505 - 8670.917: 42.7856% ( 25) 00:11:38.124 8670.917 - 8721.329: 42.9426% ( 21) 00:11:38.124 8721.329 - 8771.742: 43.0547% ( 15) 00:11:38.124 8771.742 - 8822.154: 43.1818% ( 17) 00:11:38.124 8822.154 - 8872.566: 43.2940% ( 15) 00:11:38.124 8872.566 - 8922.978: 43.4734% ( 24) 00:11:38.124 8922.978 - 8973.391: 43.7425% ( 36) 00:11:38.124 8973.391 - 9023.803: 44.0191% ( 37) 00:11:38.124 9023.803 - 9074.215: 44.3107% ( 39) 00:11:38.124 9074.215 - 9124.628: 44.6023% ( 39) 00:11:38.124 9124.628 - 
9175.040: 44.9761% ( 50) 00:11:38.124 9175.040 - 9225.452: 45.2901% ( 42) 00:11:38.124 9225.452 - 9275.865: 45.6788% ( 52) 00:11:38.124 9275.865 - 9326.277: 46.0676% ( 52) 00:11:38.124 9326.277 - 9376.689: 46.4638% ( 53) 00:11:38.124 9376.689 - 9427.102: 46.8526% ( 52) 00:11:38.124 9427.102 - 9477.514: 47.2638% ( 55) 00:11:38.124 9477.514 - 9527.926: 47.6974% ( 58) 00:11:38.124 9527.926 - 9578.338: 48.2356% ( 72) 00:11:38.124 9578.338 - 9628.751: 48.8337% ( 80) 00:11:38.124 9628.751 - 9679.163: 49.4169% ( 78) 00:11:38.124 9679.163 - 9729.575: 50.0822% ( 89) 00:11:38.124 9729.575 - 9779.988: 50.7626% ( 91) 00:11:38.124 9779.988 - 9830.400: 51.4279% ( 89) 00:11:38.124 9830.400 - 9880.812: 52.1606% ( 98) 00:11:38.124 9880.812 - 9931.225: 52.9306% ( 103) 00:11:38.124 9931.225 - 9981.637: 53.8053% ( 117) 00:11:38.124 9981.637 - 10032.049: 54.6725% ( 116) 00:11:38.124 10032.049 - 10082.462: 55.6594% ( 132) 00:11:38.124 10082.462 - 10132.874: 56.7733% ( 149) 00:11:38.124 10132.874 - 10183.286: 57.8798% ( 148) 00:11:38.124 10183.286 - 10233.698: 58.9563% ( 144) 00:11:38.124 10233.698 - 10284.111: 60.0628% ( 148) 00:11:38.124 10284.111 - 10334.523: 61.3711% ( 175) 00:11:38.124 10334.523 - 10384.935: 62.5748% ( 161) 00:11:38.124 10384.935 - 10435.348: 63.8606% ( 172) 00:11:38.124 10435.348 - 10485.760: 65.0493% ( 159) 00:11:38.124 10485.760 - 10536.172: 66.1558% ( 148) 00:11:38.124 10536.172 - 10586.585: 67.1950% ( 139) 00:11:38.124 10586.585 - 10636.997: 68.1519% ( 128) 00:11:38.124 10636.997 - 10687.409: 69.1313% ( 131) 00:11:38.124 10687.409 - 10737.822: 70.0658% ( 125) 00:11:38.124 10737.822 - 10788.234: 70.9779% ( 122) 00:11:38.124 10788.234 - 10838.646: 71.9124% ( 125) 00:11:38.124 10838.646 - 10889.058: 72.7347% ( 110) 00:11:38.124 10889.058 - 10939.471: 73.5048% ( 103) 00:11:38.124 10939.471 - 10989.883: 74.3122% ( 108) 00:11:38.124 10989.883 - 11040.295: 75.0449% ( 98) 00:11:38.124 11040.295 - 11090.708: 75.7850% ( 99) 00:11:38.124 11090.708 - 11141.120: 76.4055% ( 83) 00:11:38.124 11141.120 - 11191.532: 77.0036% ( 80) 00:11:38.124 11191.532 - 11241.945: 77.5792% ( 77) 00:11:38.124 11241.945 - 11292.357: 78.1549% ( 77) 00:11:38.124 11292.357 - 11342.769: 78.8203% ( 89) 00:11:38.124 11342.769 - 11393.182: 79.4931% ( 90) 00:11:38.124 11393.182 - 11443.594: 80.0688% ( 77) 00:11:38.124 11443.594 - 11494.006: 80.6370% ( 76) 00:11:38.124 11494.006 - 11544.418: 81.2425% ( 81) 00:11:38.124 11544.418 - 11594.831: 81.8331% ( 79) 00:11:38.124 11594.831 - 11645.243: 82.3864% ( 74) 00:11:38.124 11645.243 - 11695.655: 82.9919% ( 81) 00:11:38.124 11695.655 - 11746.068: 83.5452% ( 74) 00:11:38.124 11746.068 - 11796.480: 84.0834% ( 72) 00:11:38.124 11796.480 - 11846.892: 84.5993% ( 69) 00:11:38.124 11846.892 - 11897.305: 85.0778% ( 64) 00:11:38.124 11897.305 - 11947.717: 85.5413% ( 62) 00:11:38.124 11947.717 - 11998.129: 85.9674% ( 57) 00:11:38.124 11998.129 - 12048.542: 86.3263% ( 48) 00:11:38.124 12048.542 - 12098.954: 86.7075% ( 51) 00:11:38.124 12098.954 - 12149.366: 87.1112% ( 54) 00:11:38.124 12149.366 - 12199.778: 87.4925% ( 51) 00:11:38.124 12199.778 - 12250.191: 87.8140% ( 43) 00:11:38.124 12250.191 - 12300.603: 88.1579% ( 46) 00:11:38.124 12300.603 - 12351.015: 88.4420% ( 38) 00:11:38.124 12351.015 - 12401.428: 88.7336% ( 39) 00:11:38.124 12401.428 - 12451.840: 89.0401% ( 41) 00:11:38.124 12451.840 - 12502.252: 89.3690% ( 44) 00:11:38.124 12502.252 - 12552.665: 89.7204% ( 47) 00:11:38.124 12552.665 - 12603.077: 90.0419% ( 43) 00:11:38.124 12603.077 - 12653.489: 90.3185% ( 37) 00:11:38.124 
12653.489 - 12703.902: 90.6923% ( 50) 00:11:38.124 12703.902 - 12754.314: 91.0661% ( 50) 00:11:38.124 12754.314 - 12804.726: 91.3502% ( 38) 00:11:38.124 12804.726 - 12855.138: 91.6717% ( 43) 00:11:38.124 12855.138 - 12905.551: 91.9483% ( 37) 00:11:38.124 12905.551 - 13006.375: 92.4940% ( 73) 00:11:38.124 13006.375 - 13107.200: 92.9575% ( 62) 00:11:38.124 13107.200 - 13208.025: 93.4136% ( 61) 00:11:38.124 13208.025 - 13308.849: 93.9145% ( 67) 00:11:38.124 13308.849 - 13409.674: 94.3929% ( 64) 00:11:38.124 13409.674 - 13510.498: 94.8639% ( 63) 00:11:38.124 13510.498 - 13611.323: 95.1854% ( 43) 00:11:38.124 13611.323 - 13712.148: 95.3275% ( 19) 00:11:38.124 13712.148 - 13812.972: 95.3947% ( 9) 00:11:38.124 13812.972 - 13913.797: 95.4396% ( 6) 00:11:38.124 13913.797 - 14014.622: 95.4844% ( 6) 00:11:38.124 14014.622 - 14115.446: 95.5218% ( 5) 00:11:38.124 14115.446 - 14216.271: 95.5667% ( 6) 00:11:38.124 14216.271 - 14317.095: 95.6115% ( 6) 00:11:38.124 14317.095 - 14417.920: 95.6489% ( 5) 00:11:38.124 14417.920 - 14518.745: 95.7760% ( 17) 00:11:38.124 14518.745 - 14619.569: 95.8059% ( 4) 00:11:38.124 14619.569 - 14720.394: 95.9031% ( 13) 00:11:38.124 14720.394 - 14821.218: 96.0526% ( 20) 00:11:38.124 14821.218 - 14922.043: 96.1947% ( 19) 00:11:38.124 14922.043 - 15022.868: 96.3292% ( 18) 00:11:38.124 15022.868 - 15123.692: 96.4713% ( 19) 00:11:38.124 15123.692 - 15224.517: 96.6208% ( 20) 00:11:38.124 15224.517 - 15325.342: 96.8227% ( 27) 00:11:38.124 15325.342 - 15426.166: 96.9946% ( 23) 00:11:38.124 15426.166 - 15526.991: 97.1367% ( 19) 00:11:38.124 15526.991 - 15627.815: 97.2787% ( 19) 00:11:38.124 15627.815 - 15728.640: 97.3684% ( 12) 00:11:38.124 15728.640 - 15829.465: 97.4432% ( 10) 00:11:38.124 15829.465 - 15930.289: 97.4955% ( 7) 00:11:38.124 15930.289 - 16031.114: 97.5404% ( 6) 00:11:38.124 16031.114 - 16131.938: 97.5927% ( 7) 00:11:38.124 16131.938 - 16232.763: 97.6077% ( 2) 00:11:38.124 16333.588 - 16434.412: 97.6600% ( 7) 00:11:38.124 16434.412 - 16535.237: 97.7347% ( 10) 00:11:38.124 16535.237 - 16636.062: 97.8170% ( 11) 00:11:38.124 16636.062 - 16736.886: 97.8917% ( 10) 00:11:38.124 16736.886 - 16837.711: 98.0039% ( 15) 00:11:38.124 16837.711 - 16938.535: 98.1086% ( 14) 00:11:38.124 16938.535 - 17039.360: 98.2431% ( 18) 00:11:38.124 17039.360 - 17140.185: 98.3777% ( 18) 00:11:38.124 17140.185 - 17241.009: 98.5048% ( 17) 00:11:38.124 17241.009 - 17341.834: 98.6468% ( 19) 00:11:38.124 17341.834 - 17442.658: 98.7739% ( 17) 00:11:38.124 17442.658 - 17543.483: 98.8636% ( 12) 00:11:38.124 17543.483 - 17644.308: 98.9160% ( 7) 00:11:38.124 17644.308 - 17745.132: 98.9608% ( 6) 00:11:38.124 17745.132 - 17845.957: 99.0057% ( 6) 00:11:38.124 17845.957 - 17946.782: 99.0431% ( 5) 00:11:38.124 21072.345 - 21173.169: 99.0655% ( 3) 00:11:38.124 21173.169 - 21273.994: 99.0879% ( 3) 00:11:38.124 21273.994 - 21374.818: 99.1178% ( 4) 00:11:38.124 21374.818 - 21475.643: 99.1403% ( 3) 00:11:38.124 21475.643 - 21576.468: 99.1627% ( 3) 00:11:38.124 21576.468 - 21677.292: 99.1851% ( 3) 00:11:38.124 21677.292 - 21778.117: 99.2150% ( 4) 00:11:38.124 21778.117 - 21878.942: 99.2374% ( 3) 00:11:38.124 21878.942 - 21979.766: 99.2599% ( 3) 00:11:38.124 21979.766 - 22080.591: 99.2898% ( 4) 00:11:38.124 22080.591 - 22181.415: 99.3122% ( 3) 00:11:38.124 22181.415 - 22282.240: 99.3421% ( 4) 00:11:38.124 22282.240 - 22383.065: 99.3645% ( 3) 00:11:38.124 22383.065 - 22483.889: 99.3870% ( 3) 00:11:38.124 22483.889 - 22584.714: 99.4094% ( 3) 00:11:38.124 22584.714 - 22685.538: 99.4393% ( 4) 00:11:38.124 22685.538 - 
22786.363: 99.4617% ( 3) 00:11:38.124 22786.363 - 22887.188: 99.4842% ( 3) 00:11:38.124 22887.188 - 22988.012: 99.5141% ( 4) 00:11:38.124 22988.012 - 23088.837: 99.5215% ( 1) 00:11:38.124 29037.489 - 29239.138: 99.5739% ( 7) 00:11:38.124 29239.138 - 29440.788: 99.6262% ( 7) 00:11:38.124 29440.788 - 29642.437: 99.6860% ( 8) 00:11:38.124 29642.437 - 29844.086: 99.7309% ( 6) 00:11:38.124 29844.086 - 30045.735: 99.7832% ( 7) 00:11:38.124 30045.735 - 30247.385: 99.8355% ( 7) 00:11:38.124 30247.385 - 30449.034: 99.8879% ( 7) 00:11:38.124 30449.034 - 30650.683: 99.9402% ( 7) 00:11:38.124 30650.683 - 30852.332: 99.9925% ( 7) 00:11:38.124 30852.332 - 31053.982: 100.0000% ( 1)
00:11:38.124
00:11:38.124 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:11:38.124 ==============================================================================
00:11:38.125 Range in us Cumulative IO count
00:11:38.125 5721.797 - 5747.003: 0.0075% ( 1) 00:11:38.125 5747.003 - 5772.209: 0.0299% ( 3) 00:11:38.125 5772.209 - 5797.415: 0.0598% ( 4) 00:11:38.125 5797.415 - 5822.622: 0.0972% ( 5) 00:11:38.125 5822.622 - 5847.828: 0.1570% ( 8) 00:11:38.125 5847.828 - 5873.034: 0.1944% ( 5) 00:11:38.125 5873.034 - 5898.240: 0.2691% ( 10) 00:11:38.125 5898.240 - 5923.446: 0.3738% ( 14) 00:11:38.125 5923.446 - 5948.652: 0.4859% ( 15) 00:11:38.125 5948.652 - 5973.858: 0.6280% ( 19) 00:11:38.125 5973.858 - 5999.065: 0.7775% ( 20) 00:11:38.125 5999.065 - 6024.271: 0.9495% ( 23) 00:11:38.125 6024.271 - 6049.477: 1.1139% ( 22) 00:11:38.125 6049.477 - 6074.683: 1.2560% ( 19) 00:11:38.125 6074.683 - 6099.889: 1.4578% ( 27) 00:11:38.125 6099.889 - 6125.095: 1.7120% ( 34) 00:11:38.125 6125.095 - 6150.302: 1.9587% ( 33) 00:11:38.125 6150.302 - 6175.508: 2.2578% ( 40) 00:11:38.125 6175.508 - 6200.714: 2.5867% ( 44) 00:11:38.125 6200.714 - 6225.920: 2.9157% ( 44) 00:11:38.125 6225.920 - 6251.126: 3.2446% ( 44) 00:11:38.125 6251.126 - 6276.332: 3.5885% ( 46) 00:11:38.125 6276.332 - 6301.538: 4.0221% ( 58) 00:11:38.125 6301.538 - 6326.745: 4.4632% ( 59) 00:11:38.125 6326.745 - 6351.951: 5.0314% ( 76) 00:11:38.125 6351.951 - 6377.157: 5.5846% ( 74) 00:11:38.125 6377.157 - 6402.363: 6.1453% ( 75) 00:11:38.125 6402.363 - 6427.569: 6.6687% ( 70) 00:11:38.125 6427.569 - 6452.775: 7.2667% ( 80) 00:11:38.125 6452.775 - 6503.188: 8.5676% ( 174) 00:11:38.125 6503.188 - 6553.600: 9.9806% ( 189) 00:11:38.125 6553.600 - 6604.012: 11.4833% ( 201) 00:11:38.125 6604.012 - 6654.425: 12.9635% ( 198) 00:11:38.125 6654.425 - 6704.837: 14.3615% ( 187) 00:11:38.125 6704.837 - 6755.249: 15.8642% ( 201) 00:11:38.125 6755.249 - 6805.662: 17.3744% ( 202) 00:11:38.125 6805.662 - 6856.074: 19.0191% ( 220) 00:11:38.125 6856.074 - 6906.486: 20.6863% ( 223) 00:11:38.125 6906.486 - 6956.898: 22.3385% ( 221) 00:11:38.125 6956.898 - 7007.311: 23.9309% ( 213) 00:11:38.125 7007.311 - 7057.723: 25.4635% ( 205) 00:11:38.125 7057.723 - 7108.135: 26.8989% ( 192) 00:11:38.125 7108.135 - 7158.548: 28.2596% ( 182) 00:11:38.125 7158.548 - 7208.960: 29.4782% ( 163) 00:11:38.125 7208.960 - 7259.372: 30.5622% ( 145) 00:11:38.125 7259.372 - 7309.785: 31.6537% ( 146) 00:11:38.125 7309.785 - 7360.197: 32.5957% ( 126) 00:11:38.125 7360.197 - 7410.609: 33.4928% ( 120) 00:11:38.125 7410.609 - 7461.022: 34.3376% ( 113) 00:11:38.125 7461.022 - 7511.434: 34.9806% ( 86) 00:11:38.125 7511.434 - 7561.846: 35.6534% ( 90) 00:11:38.125 7561.846 - 7612.258: 36.2216% ( 76) 00:11:38.125 7612.258 - 7662.671: 36.8197% ( 80) 00:11:38.125 7662.671 - 7713.083: 37.3206% ( 67) 00:11:38.125 7713.083 -
7763.495: 37.8065% ( 65) 00:11:38.125 7763.495 - 7813.908: 38.2252% ( 56) 00:11:38.125 7813.908 - 7864.320: 38.6139% ( 52) 00:11:38.125 7864.320 - 7914.732: 38.9504% ( 45) 00:11:38.125 7914.732 - 7965.145: 39.2718% ( 43) 00:11:38.125 7965.145 - 8015.557: 39.5260% ( 34) 00:11:38.125 8015.557 - 8065.969: 39.7727% ( 33) 00:11:38.125 8065.969 - 8116.382: 40.0269% ( 34) 00:11:38.125 8116.382 - 8166.794: 40.3409% ( 42) 00:11:38.125 8166.794 - 8217.206: 40.6474% ( 41) 00:11:38.125 8217.206 - 8267.618: 40.9315% ( 38) 00:11:38.125 8267.618 - 8318.031: 41.1782% ( 33) 00:11:38.125 8318.031 - 8368.443: 41.4249% ( 33) 00:11:38.125 8368.443 - 8418.855: 41.6567% ( 31) 00:11:38.125 8418.855 - 8469.268: 41.8586% ( 27) 00:11:38.125 8469.268 - 8519.680: 42.0529% ( 26) 00:11:38.125 8519.680 - 8570.092: 42.2174% ( 22) 00:11:38.125 8570.092 - 8620.505: 42.4267% ( 28) 00:11:38.125 8620.505 - 8670.917: 42.6361% ( 28) 00:11:38.125 8670.917 - 8721.329: 42.8005% ( 22) 00:11:38.125 8721.329 - 8771.742: 43.0024% ( 27) 00:11:38.125 8771.742 - 8822.154: 43.2267% ( 30) 00:11:38.125 8822.154 - 8872.566: 43.4734% ( 33) 00:11:38.125 8872.566 - 8922.978: 43.6827% ( 28) 00:11:38.125 8922.978 - 8973.391: 43.9145% ( 31) 00:11:38.125 8973.391 - 9023.803: 44.1313% ( 29) 00:11:38.125 9023.803 - 9074.215: 44.3705% ( 32) 00:11:38.125 9074.215 - 9124.628: 44.6397% ( 36) 00:11:38.125 9124.628 - 9175.040: 44.9237% ( 38) 00:11:38.125 9175.040 - 9225.452: 45.2602% ( 45) 00:11:38.125 9225.452 - 9275.865: 45.6041% ( 46) 00:11:38.125 9275.865 - 9326.277: 45.9629% ( 48) 00:11:38.125 9326.277 - 9376.689: 46.3592% ( 53) 00:11:38.125 9376.689 - 9427.102: 46.8077% ( 60) 00:11:38.125 9427.102 - 9477.514: 47.3385% ( 71) 00:11:38.125 9477.514 - 9527.926: 47.8319% ( 66) 00:11:38.125 9527.926 - 9578.338: 48.3478% ( 69) 00:11:38.125 9578.338 - 9628.751: 48.8861% ( 72) 00:11:38.125 9628.751 - 9679.163: 49.4767% ( 79) 00:11:38.125 9679.163 - 9729.575: 50.0897% ( 82) 00:11:38.125 9729.575 - 9779.988: 50.7252% ( 85) 00:11:38.125 9779.988 - 9830.400: 51.3831% ( 88) 00:11:38.125 9830.400 - 9880.812: 52.1382% ( 101) 00:11:38.125 9880.812 - 9931.225: 52.8858% ( 100) 00:11:38.125 9931.225 - 9981.637: 53.8876% ( 134) 00:11:38.125 9981.637 - 10032.049: 54.9791% ( 146) 00:11:38.125 10032.049 - 10082.462: 55.9435% ( 129) 00:11:38.125 10082.462 - 10132.874: 56.9453% ( 134) 00:11:38.125 10132.874 - 10183.286: 57.9172% ( 130) 00:11:38.125 10183.286 - 10233.698: 58.9862% ( 143) 00:11:38.125 10233.698 - 10284.111: 60.0703% ( 145) 00:11:38.125 10284.111 - 10334.523: 61.0945% ( 137) 00:11:38.125 10334.523 - 10384.935: 62.1187% ( 137) 00:11:38.125 10384.935 - 10435.348: 63.0682% ( 127) 00:11:38.125 10435.348 - 10485.760: 63.9952% ( 124) 00:11:38.125 10485.760 - 10536.172: 65.1391% ( 153) 00:11:38.125 10536.172 - 10586.585: 66.1932% ( 141) 00:11:38.125 10586.585 - 10636.997: 67.1875% ( 133) 00:11:38.125 10636.997 - 10687.409: 68.1519% ( 129) 00:11:38.125 10687.409 - 10737.822: 69.2060% ( 141) 00:11:38.125 10737.822 - 10788.234: 70.1854% ( 131) 00:11:38.125 10788.234 - 10838.646: 71.0825% ( 120) 00:11:38.125 10838.646 - 10889.058: 71.8825% ( 107) 00:11:38.125 10889.058 - 10939.471: 72.7123% ( 111) 00:11:38.125 10939.471 - 10989.883: 73.5272% ( 109) 00:11:38.125 10989.883 - 11040.295: 74.4842% ( 128) 00:11:38.125 11040.295 - 11090.708: 75.2691% ( 105) 00:11:38.125 11090.708 - 11141.120: 76.1065% ( 112) 00:11:38.125 11141.120 - 11191.532: 76.8541% ( 100) 00:11:38.125 11191.532 - 11241.945: 77.6316% ( 104) 00:11:38.125 11241.945 - 11292.357: 78.3717% ( 99) 00:11:38.125 
11292.357 - 11342.769: 79.0595% ( 92) 00:11:38.125 11342.769 - 11393.182: 79.7697% ( 95) 00:11:38.125 11393.182 - 11443.594: 80.4127% ( 86) 00:11:38.125 11443.594 - 11494.006: 81.0481% ( 85) 00:11:38.125 11494.006 - 11544.418: 81.6537% ( 81) 00:11:38.125 11544.418 - 11594.831: 82.2593% ( 81) 00:11:38.125 11594.831 - 11645.243: 82.9022% ( 86) 00:11:38.125 11645.243 - 11695.655: 83.4853% ( 78) 00:11:38.125 11695.655 - 11746.068: 84.0535% ( 76) 00:11:38.125 11746.068 - 11796.480: 84.6591% ( 81) 00:11:38.125 11796.480 - 11846.892: 85.1600% ( 67) 00:11:38.125 11846.892 - 11897.305: 85.6459% ( 65) 00:11:38.125 11897.305 - 11947.717: 86.0870% ( 59) 00:11:38.125 11947.717 - 11998.129: 86.5206% ( 58) 00:11:38.125 11998.129 - 12048.542: 86.8944% ( 50) 00:11:38.125 12048.542 - 12098.954: 87.2234% ( 44) 00:11:38.125 12098.954 - 12149.366: 87.5523% ( 44) 00:11:38.125 12149.366 - 12199.778: 87.8589% ( 41) 00:11:38.125 12199.778 - 12250.191: 88.1803% ( 43) 00:11:38.125 12250.191 - 12300.603: 88.5093% ( 44) 00:11:38.125 12300.603 - 12351.015: 88.8756% ( 49) 00:11:38.125 12351.015 - 12401.428: 89.2494% ( 50) 00:11:38.125 12401.428 - 12451.840: 89.6008% ( 47) 00:11:38.125 12451.840 - 12502.252: 89.9895% ( 52) 00:11:38.125 12502.252 - 12552.665: 90.3260% ( 45) 00:11:38.125 12552.665 - 12603.077: 90.5951% ( 36) 00:11:38.125 12603.077 - 12653.489: 90.8119% ( 29) 00:11:38.125 12653.489 - 12703.902: 91.0063% ( 26) 00:11:38.125 12703.902 - 12754.314: 91.2081% ( 27) 00:11:38.125 12754.314 - 12804.726: 91.3950% ( 25) 00:11:38.125 12804.726 - 12855.138: 91.6343% ( 32) 00:11:38.125 12855.138 - 12905.551: 91.8511% ( 29) 00:11:38.125 12905.551 - 13006.375: 92.3221% ( 63) 00:11:38.125 13006.375 - 13107.200: 92.8903% ( 76) 00:11:38.125 13107.200 - 13208.025: 93.2940% ( 54) 00:11:38.125 13208.025 - 13308.849: 93.6453% ( 47) 00:11:38.125 13308.849 - 13409.674: 94.0117% ( 49) 00:11:38.125 13409.674 - 13510.498: 94.3331% ( 43) 00:11:38.125 13510.498 - 13611.323: 94.6696% ( 45) 00:11:38.125 13611.323 - 13712.148: 94.9686% ( 40) 00:11:38.125 13712.148 - 13812.972: 95.1555% ( 25) 00:11:38.125 13812.972 - 13913.797: 95.2901% ( 18) 00:11:38.125 13913.797 - 14014.622: 95.4471% ( 21) 00:11:38.125 14014.622 - 14115.446: 95.5891% ( 19) 00:11:38.125 14115.446 - 14216.271: 95.7386% ( 20) 00:11:38.125 14216.271 - 14317.095: 95.7910% ( 7) 00:11:38.125 14317.095 - 14417.920: 95.8283% ( 5) 00:11:38.125 14417.920 - 14518.745: 95.8956% ( 9) 00:11:38.125 14518.745 - 14619.569: 95.9853% ( 12) 00:11:38.125 14619.569 - 14720.394: 96.0676% ( 11) 00:11:38.125 14720.394 - 14821.218: 96.1947% ( 17) 00:11:38.125 14821.218 - 14922.043: 96.3292% ( 18) 00:11:38.125 14922.043 - 15022.868: 96.4638% ( 18) 00:11:38.125 15022.868 - 15123.692: 96.5984% ( 18) 00:11:38.125 15123.692 - 15224.517: 96.7180% ( 16) 00:11:38.125 15224.517 - 15325.342: 96.8077% ( 12) 00:11:38.125 15325.342 - 15426.166: 96.8900% ( 11) 00:11:38.125 15426.166 - 15526.991: 96.9871% ( 13) 00:11:38.125 15526.991 - 15627.815: 97.0619% ( 10) 00:11:38.125 15627.815 - 15728.640: 97.1666% ( 14) 00:11:38.125 15728.640 - 15829.465: 97.2787% ( 15) 00:11:38.126 15829.465 - 15930.289: 97.3684% ( 12) 00:11:38.126 15930.289 - 16031.114: 97.4507% ( 11) 00:11:38.126 16031.114 - 16131.938: 97.5478% ( 13) 00:11:38.126 16131.938 - 16232.763: 97.6450% ( 13) 00:11:38.126 16232.763 - 16333.588: 97.7347% ( 12) 00:11:38.126 16333.588 - 16434.412: 97.8245% ( 12) 00:11:38.126 16434.412 - 16535.237: 97.9142% ( 12) 00:11:38.126 16535.237 - 16636.062: 98.0188% ( 14) 00:11:38.126 16636.062 - 16736.886: 98.0936% ( 10) 
00:11:38.126 16736.886 - 16837.711: 98.1684% ( 10) 00:11:38.126 16837.711 - 16938.535: 98.2656% ( 13) 00:11:38.126 16938.535 - 17039.360: 98.3478% ( 11) 00:11:38.126 17039.360 - 17140.185: 98.4375% ( 12) 00:11:38.126 17140.185 - 17241.009: 98.5422% ( 14) 00:11:38.126 17241.009 - 17341.834: 98.6319% ( 12) 00:11:38.126 17341.834 - 17442.658: 98.7291% ( 13) 00:11:38.126 17442.658 - 17543.483: 98.8188% ( 12) 00:11:38.126 17543.483 - 17644.308: 98.9085% ( 12) 00:11:38.126 17644.308 - 17745.132: 98.9608% ( 7) 00:11:38.126 17745.132 - 17845.957: 99.0057% ( 6) 00:11:38.126 17845.957 - 17946.782: 99.0431% ( 5) 00:11:38.126 19559.975 - 19660.800: 99.0505% ( 1) 00:11:38.126 19660.800 - 19761.625: 99.0730% ( 3) 00:11:38.126 19761.625 - 19862.449: 99.1029% ( 4) 00:11:38.126 19862.449 - 19963.274: 99.1253% ( 3) 00:11:38.126 19963.274 - 20064.098: 99.1552% ( 4) 00:11:38.126 20064.098 - 20164.923: 99.1851% ( 4) 00:11:38.126 20164.923 - 20265.748: 99.2075% ( 3) 00:11:38.126 20265.748 - 20366.572: 99.2374% ( 4) 00:11:38.126 20366.572 - 20467.397: 99.2599% ( 3) 00:11:38.126 20467.397 - 20568.222: 99.2823% ( 3) 00:11:38.126 20568.222 - 20669.046: 99.3047% ( 3) 00:11:38.126 20669.046 - 20769.871: 99.3272% ( 3) 00:11:38.126 20769.871 - 20870.695: 99.3496% ( 3) 00:11:38.126 20870.695 - 20971.520: 99.3720% ( 3) 00:11:38.126 20971.520 - 21072.345: 99.4019% ( 4) 00:11:38.126 21072.345 - 21173.169: 99.4243% ( 3) 00:11:38.126 21173.169 - 21273.994: 99.4542% ( 4) 00:11:38.126 21273.994 - 21374.818: 99.4767% ( 3) 00:11:38.126 21374.818 - 21475.643: 99.5066% ( 4) 00:11:38.126 21475.643 - 21576.468: 99.5215% ( 2) 00:11:38.126 27625.945 - 27827.594: 99.5290% ( 1) 00:11:38.126 27827.594 - 28029.243: 99.5739% ( 6) 00:11:38.126 28029.243 - 28230.892: 99.6337% ( 8) 00:11:38.126 28230.892 - 28432.542: 99.6785% ( 6) 00:11:38.126 28432.542 - 28634.191: 99.7309% ( 7) 00:11:38.126 28634.191 - 28835.840: 99.7832% ( 7) 00:11:38.126 28835.840 - 29037.489: 99.8355% ( 7) 00:11:38.126 29037.489 - 29239.138: 99.8879% ( 7) 00:11:38.126 29239.138 - 29440.788: 99.9402% ( 7) 00:11:38.126 29440.788 - 29642.437: 99.9925% ( 7) 00:11:38.126 29642.437 - 29844.086: 100.0000% ( 1)
00:11:38.126
00:11:38.126 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:11:38.126 ==============================================================================
00:11:38.126 Range in us Cumulative IO count
00:11:38.126 5671.385 - 5696.591: 0.0224% ( 3) 00:11:38.126 5696.591 - 5721.797: 0.0374% ( 2) 00:11:38.126 5721.797 - 5747.003: 0.0449% ( 1) 00:11:38.126 5747.003 - 5772.209: 0.0748% ( 4) 00:11:38.126 5772.209 - 5797.415: 0.1196% ( 6) 00:11:38.126 5797.415 - 5822.622: 0.1869% ( 9) 00:11:38.126 5822.622 - 5847.828: 0.2617% ( 10) 00:11:38.126 5847.828 - 5873.034: 0.3439% ( 11) 00:11:38.126 5873.034 - 5898.240: 0.4187% ( 10) 00:11:38.126 5898.240 - 5923.446: 0.4934% ( 10) 00:11:38.126 5923.446 - 5948.652: 0.5757% ( 11) 00:11:38.126 5948.652 - 5973.858: 0.6579% ( 11) 00:11:38.126 5973.858 - 5999.065: 0.7775% ( 16) 00:11:38.126 5999.065 - 6024.271: 0.9270% ( 20) 00:11:38.126 6024.271 - 6049.477: 1.1588% ( 31) 00:11:38.126 6049.477 - 6074.683: 1.3756% ( 29) 00:11:38.126 6074.683 - 6099.889: 1.6074% ( 31) 00:11:38.126 6099.889 - 6125.095: 1.8615% ( 34) 00:11:38.126 6125.095 - 6150.302: 2.1232% ( 35) 00:11:38.126 6150.302 - 6175.508: 2.4297% ( 41) 00:11:38.126 6175.508 - 6200.714: 2.7138% ( 38) 00:11:38.126 6200.714 - 6225.920: 3.0353% ( 43) 00:11:38.126 6225.920 - 6251.126: 3.3717% ( 45) 00:11:38.126 6251.126 - 6276.332: 3.7306% ( 48) 00:11:38.126 6276.332 -
6301.538: 4.1193% ( 52) 00:11:38.126 6301.538 - 6326.745: 4.5604% ( 59) 00:11:38.126 6326.745 - 6351.951: 5.0464% ( 65) 00:11:38.126 6351.951 - 6377.157: 5.5024% ( 61) 00:11:38.126 6377.157 - 6402.363: 6.0481% ( 73) 00:11:38.126 6402.363 - 6427.569: 6.5939% ( 73) 00:11:38.126 6427.569 - 6452.775: 7.2443% ( 87) 00:11:38.126 6452.775 - 6503.188: 8.5153% ( 170) 00:11:38.126 6503.188 - 6553.600: 9.8385% ( 177) 00:11:38.126 6553.600 - 6604.012: 11.2216% ( 185) 00:11:38.126 6604.012 - 6654.425: 12.7841% ( 209) 00:11:38.126 6654.425 - 6704.837: 14.2718% ( 199) 00:11:38.126 6704.837 - 6755.249: 15.7969% ( 204) 00:11:38.126 6755.249 - 6805.662: 17.3594% ( 209) 00:11:38.126 6805.662 - 6856.074: 19.0042% ( 220) 00:11:38.126 6856.074 - 6906.486: 20.6788% ( 224) 00:11:38.126 6906.486 - 6956.898: 22.2787% ( 214) 00:11:38.126 6956.898 - 7007.311: 23.8487% ( 210) 00:11:38.126 7007.311 - 7057.723: 25.2916% ( 193) 00:11:38.126 7057.723 - 7108.135: 26.7344% ( 193) 00:11:38.126 7108.135 - 7158.548: 28.1549% ( 190) 00:11:38.126 7158.548 - 7208.960: 29.4109% ( 168) 00:11:38.126 7208.960 - 7259.372: 30.5846% ( 157) 00:11:38.126 7259.372 - 7309.785: 31.6313% ( 140) 00:11:38.126 7309.785 - 7360.197: 32.5807% ( 127) 00:11:38.126 7360.197 - 7410.609: 33.5526% ( 130) 00:11:38.126 7410.609 - 7461.022: 34.3974% ( 113) 00:11:38.126 7461.022 - 7511.434: 35.1600% ( 102) 00:11:38.126 7511.434 - 7561.846: 35.7431% ( 78) 00:11:38.126 7561.846 - 7612.258: 36.2964% ( 74) 00:11:38.126 7612.258 - 7662.671: 36.8047% ( 68) 00:11:38.126 7662.671 - 7713.083: 37.2757% ( 63) 00:11:38.126 7713.083 - 7763.495: 37.6794% ( 54) 00:11:38.126 7763.495 - 7813.908: 38.0458% ( 49) 00:11:38.126 7813.908 - 7864.320: 38.4644% ( 56) 00:11:38.126 7864.320 - 7914.732: 38.8158% ( 47) 00:11:38.126 7914.732 - 7965.145: 39.0999% ( 38) 00:11:38.126 7965.145 - 8015.557: 39.4363% ( 45) 00:11:38.126 8015.557 - 8065.969: 39.7428% ( 41) 00:11:38.126 8065.969 - 8116.382: 40.0194% ( 37) 00:11:38.126 8116.382 - 8166.794: 40.3334% ( 42) 00:11:38.126 8166.794 - 8217.206: 40.5951% ( 35) 00:11:38.126 8217.206 - 8267.618: 40.8867% ( 39) 00:11:38.126 8267.618 - 8318.031: 41.1184% ( 31) 00:11:38.126 8318.031 - 8368.443: 41.3128% ( 26) 00:11:38.126 8368.443 - 8418.855: 41.5147% ( 27) 00:11:38.126 8418.855 - 8469.268: 41.6791% ( 22) 00:11:38.126 8469.268 - 8519.680: 41.8959% ( 29) 00:11:38.126 8519.680 - 8570.092: 42.0604% ( 22) 00:11:38.126 8570.092 - 8620.505: 42.2697% ( 28) 00:11:38.126 8620.505 - 8670.917: 42.4940% ( 30) 00:11:38.126 8670.917 - 8721.329: 42.7333% ( 32) 00:11:38.126 8721.329 - 8771.742: 42.9650% ( 31) 00:11:38.126 8771.742 - 8822.154: 43.2342% ( 36) 00:11:38.126 8822.154 - 8872.566: 43.4659% ( 31) 00:11:38.126 8872.566 - 8922.978: 43.7799% ( 42) 00:11:38.126 8922.978 - 8973.391: 44.0640% ( 38) 00:11:38.126 8973.391 - 9023.803: 44.3780% ( 42) 00:11:38.126 9023.803 - 9074.215: 44.6621% ( 38) 00:11:38.126 9074.215 - 9124.628: 44.9985% ( 45) 00:11:38.126 9124.628 - 9175.040: 45.3050% ( 41) 00:11:38.126 9175.040 - 9225.452: 45.6115% ( 41) 00:11:38.126 9225.452 - 9275.865: 45.9704% ( 48) 00:11:38.126 9275.865 - 9326.277: 46.2694% ( 40) 00:11:38.126 9326.277 - 9376.689: 46.6582% ( 52) 00:11:38.126 9376.689 - 9427.102: 47.0769% ( 56) 00:11:38.126 9427.102 - 9477.514: 47.5478% ( 63) 00:11:38.126 9477.514 - 9527.926: 48.0413% ( 66) 00:11:38.126 9527.926 - 9578.338: 48.5721% ( 71) 00:11:38.126 9578.338 - 9628.751: 49.1477% ( 77) 00:11:38.126 9628.751 - 9679.163: 49.6860% ( 72) 00:11:38.126 9679.163 - 9729.575: 50.3364% ( 87) 00:11:38.126 9729.575 - 9779.988: 
50.9644% ( 84) 00:11:38.126 9779.988 - 9830.400: 51.5999% ( 85) 00:11:38.126 9830.400 - 9880.812: 52.2054% ( 81) 00:11:38.126 9880.812 - 9931.225: 52.8858% ( 91) 00:11:38.126 9931.225 - 9981.637: 53.5960% ( 95) 00:11:38.126 9981.637 - 10032.049: 54.4632% ( 116) 00:11:38.126 10032.049 - 10082.462: 55.3230% ( 115) 00:11:38.126 10082.462 - 10132.874: 56.1603% ( 112) 00:11:38.126 10132.874 - 10183.286: 57.1770% ( 136) 00:11:38.126 10183.286 - 10233.698: 58.1863% ( 135) 00:11:38.126 10233.698 - 10284.111: 59.3077% ( 150) 00:11:38.126 10284.111 - 10334.523: 60.4142% ( 148) 00:11:38.126 10334.523 - 10384.935: 61.4907% ( 144) 00:11:38.126 10384.935 - 10435.348: 62.5822% ( 146) 00:11:38.126 10435.348 - 10485.760: 63.7261% ( 153) 00:11:38.126 10485.760 - 10536.172: 64.9073% ( 158) 00:11:38.126 10536.172 - 10586.585: 65.9988% ( 146) 00:11:38.126 10586.585 - 10636.997: 66.9931% ( 133) 00:11:38.126 10636.997 - 10687.409: 68.0846% ( 146) 00:11:38.126 10687.409 - 10737.822: 69.1238% ( 139) 00:11:38.126 10737.822 - 10788.234: 70.1555% ( 138) 00:11:38.126 10788.234 - 10838.646: 71.0825% ( 124) 00:11:38.126 10838.646 - 10889.058: 71.9124% ( 111) 00:11:38.126 10889.058 - 10939.471: 72.7048% ( 106) 00:11:38.126 10939.471 - 10989.883: 73.5347% ( 111) 00:11:38.126 10989.883 - 11040.295: 74.3571% ( 110) 00:11:38.126 11040.295 - 11090.708: 75.1794% ( 110) 00:11:38.126 11090.708 - 11141.120: 75.9196% ( 99) 00:11:38.126 11141.120 - 11191.532: 76.6074% ( 92) 00:11:38.126 11191.532 - 11241.945: 77.2877% ( 91) 00:11:38.126 11241.945 - 11292.357: 77.9007% ( 82) 00:11:38.126 11292.357 - 11342.769: 78.5885% ( 92) 00:11:38.126 11342.769 - 11393.182: 79.2389% ( 87) 00:11:38.126 11393.182 - 11443.594: 79.9641% ( 97) 00:11:38.126 11443.594 - 11494.006: 80.6669% ( 94) 00:11:38.126 11494.006 - 11544.418: 81.2575% ( 79) 00:11:38.126 11544.418 - 11594.831: 81.8481% ( 79) 00:11:38.127 11594.831 - 11645.243: 82.3864% ( 72) 00:11:38.127 11645.243 - 11695.655: 82.9396% ( 74) 00:11:38.127 11695.655 - 11746.068: 83.5302% ( 79) 00:11:38.127 11746.068 - 11796.480: 84.1358% ( 81) 00:11:38.127 11796.480 - 11846.892: 84.7862% ( 87) 00:11:38.127 11846.892 - 11897.305: 85.4142% ( 84) 00:11:38.127 11897.305 - 11947.717: 85.9599% ( 73) 00:11:38.127 11947.717 - 11998.129: 86.4833% ( 70) 00:11:38.127 11998.129 - 12048.542: 86.9019% ( 56) 00:11:38.127 12048.542 - 12098.954: 87.3131% ( 55) 00:11:38.127 12098.954 - 12149.366: 87.7392% ( 57) 00:11:38.127 12149.366 - 12199.778: 88.1130% ( 50) 00:11:38.127 12199.778 - 12250.191: 88.4868% ( 50) 00:11:38.127 12250.191 - 12300.603: 88.8382% ( 47) 00:11:38.127 12300.603 - 12351.015: 89.1896% ( 47) 00:11:38.127 12351.015 - 12401.428: 89.5410% ( 47) 00:11:38.127 12401.428 - 12451.840: 89.9222% ( 51) 00:11:38.127 12451.840 - 12502.252: 90.1914% ( 36) 00:11:38.127 12502.252 - 12552.665: 90.4306% ( 32) 00:11:38.127 12552.665 - 12603.077: 90.6624% ( 31) 00:11:38.127 12603.077 - 12653.489: 90.9166% ( 34) 00:11:38.127 12653.489 - 12703.902: 91.0736% ( 21) 00:11:38.127 12703.902 - 12754.314: 91.2829% ( 28) 00:11:38.127 12754.314 - 12804.726: 91.4847% ( 27) 00:11:38.127 12804.726 - 12855.138: 91.7165% ( 31) 00:11:38.127 12855.138 - 12905.551: 91.9483% ( 31) 00:11:38.127 12905.551 - 13006.375: 92.4043% ( 61) 00:11:38.127 13006.375 - 13107.200: 92.8828% ( 64) 00:11:38.127 13107.200 - 13208.025: 93.3388% ( 61) 00:11:38.127 13208.025 - 13308.849: 93.7799% ( 59) 00:11:38.127 13308.849 - 13409.674: 94.1462% ( 49) 00:11:38.127 13409.674 - 13510.498: 94.4752% ( 44) 00:11:38.127 13510.498 - 13611.323: 94.7742% ( 40) 
00:11:38.127 13611.323 - 13712.148: 95.0508% ( 37) 00:11:38.127 13712.148 - 13812.972: 95.2452% ( 26) 00:11:38.127 13812.972 - 13913.797: 95.4695% ( 30) 00:11:38.127 13913.797 - 14014.622: 95.6340% ( 22) 00:11:38.127 14014.622 - 14115.446: 95.7835% ( 20) 00:11:38.127 14115.446 - 14216.271: 95.9106% ( 17) 00:11:38.127 14216.271 - 14317.095: 96.0526% ( 19) 00:11:38.127 14317.095 - 14417.920: 96.1797% ( 17) 00:11:38.127 14417.920 - 14518.745: 96.3143% ( 18) 00:11:38.127 14518.745 - 14619.569: 96.4190% ( 14) 00:11:38.127 14619.569 - 14720.394: 96.5087% ( 12) 00:11:38.127 14720.394 - 14821.218: 96.5760% ( 9) 00:11:38.127 14821.218 - 14922.043: 96.6133% ( 5) 00:11:38.127 14922.043 - 15022.868: 96.6507% ( 5) 00:11:38.127 15022.868 - 15123.692: 96.7479% ( 13) 00:11:38.127 15123.692 - 15224.517: 96.8301% ( 11) 00:11:38.127 15224.517 - 15325.342: 96.9348% ( 14) 00:11:38.127 15325.342 - 15426.166: 97.0245% ( 12) 00:11:38.127 15426.166 - 15526.991: 97.1142% ( 12) 00:11:38.127 15526.991 - 15627.815: 97.2189% ( 14) 00:11:38.127 15627.815 - 15728.640: 97.3086% ( 12) 00:11:38.127 15728.640 - 15829.465: 97.3983% ( 12) 00:11:38.127 15829.465 - 15930.289: 97.4955% ( 13) 00:11:38.127 15930.289 - 16031.114: 97.5927% ( 13) 00:11:38.127 16031.114 - 16131.938: 97.6077% ( 2) 00:11:38.127 16232.763 - 16333.588: 97.6151% ( 1) 00:11:38.127 16333.588 - 16434.412: 97.6525% ( 5) 00:11:38.127 16434.412 - 16535.237: 97.7497% ( 13) 00:11:38.127 16535.237 - 16636.062: 97.8469% ( 13) 00:11:38.127 16636.062 - 16736.886: 97.9516% ( 14) 00:11:38.127 16736.886 - 16837.711: 98.0562% ( 14) 00:11:38.127 16837.711 - 16938.535: 98.1758% ( 16) 00:11:38.127 16938.535 - 17039.360: 98.3104% ( 18) 00:11:38.127 17039.360 - 17140.185: 98.4450% ( 18) 00:11:38.127 17140.185 - 17241.009: 98.5870% ( 19) 00:11:38.127 17241.009 - 17341.834: 98.7291% ( 19) 00:11:38.127 17341.834 - 17442.658: 98.8038% ( 10) 00:11:38.127 17442.658 - 17543.483: 98.8487% ( 6) 00:11:38.127 17543.483 - 17644.308: 98.8861% ( 5) 00:11:38.127 17644.308 - 17745.132: 98.9309% ( 6) 00:11:38.127 17745.132 - 17845.957: 98.9758% ( 6) 00:11:38.127 17845.957 - 17946.782: 99.0206% ( 6) 00:11:38.127 17946.782 - 18047.606: 99.0431% ( 3) 00:11:38.127 18249.255 - 18350.080: 99.1103% ( 9) 00:11:38.127 18350.080 - 18450.905: 99.1403% ( 4) 00:11:38.127 18450.905 - 18551.729: 99.1552% ( 2) 00:11:38.127 18551.729 - 18652.554: 99.1702% ( 2) 00:11:38.127 18652.554 - 18753.378: 99.2001% ( 4) 00:11:38.127 18753.378 - 18854.203: 99.2225% ( 3) 00:11:38.127 18854.203 - 18955.028: 99.2524% ( 4) 00:11:38.127 18955.028 - 19055.852: 99.2972% ( 6) 00:11:38.127 19055.852 - 19156.677: 99.3197% ( 3) 00:11:38.127 19156.677 - 19257.502: 99.3496% ( 4) 00:11:38.127 19257.502 - 19358.326: 99.3795% ( 4) 00:11:38.127 19358.326 - 19459.151: 99.3944% ( 2) 00:11:38.127 19459.151 - 19559.975: 99.4169% ( 3) 00:11:38.127 19559.975 - 19660.800: 99.4468% ( 4) 00:11:38.127 19660.800 - 19761.625: 99.4692% ( 3) 00:11:38.127 19761.625 - 19862.449: 99.4991% ( 4) 00:11:38.127 19862.449 - 19963.274: 99.5215% ( 3) 00:11:38.127 26214.400 - 26416.049: 99.5290% ( 1) 00:11:38.127 26416.049 - 26617.698: 99.5813% ( 7) 00:11:38.127 26617.698 - 26819.348: 99.6337% ( 7) 00:11:38.127 26819.348 - 27020.997: 99.6860% ( 7) 00:11:38.127 27020.997 - 27222.646: 99.7383% ( 7) 00:11:38.127 27222.646 - 27424.295: 99.7907% ( 7) 00:11:38.127 27424.295 - 27625.945: 99.8430% ( 7) 00:11:38.127 27625.945 - 27827.594: 99.8953% ( 7) 00:11:38.127 27827.594 - 28029.243: 99.9477% ( 7) 00:11:38.127 28029.243 - 28230.892: 100.0000% ( 7) 00:11:38.127 
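The invocation that follows comes from nvme/nvme.sh driving SPDK's example benchmark binary directly. As a minimal sketch, the equivalent manual run is shown below, reusing the binary path and flags exactly as they appear in this log; the per-flag annotations are assumptions based on spdk_nvme_perf's help output rather than anything the log itself states:

    # Sketch: re-run the same workload by hand (root is typically needed for
    # NVMe device access via vfio-pci/uio).
    #   -q 128    queue depth: keep 128 I/Os outstanding
    #   -w write  sequential-write I/O pattern
    #   -o 12288  I/O size in bytes (12 KiB per I/O)
    #   -t 1      run time in seconds
    #   -LL       latency tracking; passing -L twice appears to be what enables
    #             the detailed per-bucket histograms printed in this log
    #   -i 0      shared-memory group ID, so perf can coexist with other SPDK
    #             processes on the same host
    sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
        -q 128 -w write -o 12288 -t 1 -LL -i 0

In the histogram dumps, the percentage column is cumulative (the fraction of all I/Os that completed at or below the bucket's upper latency bound) and the parenthesized value is that bucket's own I/O count; each percentile in the "Summary latency data" tables is the upper bound of the first bucket whose cumulative percentage reaches the target.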
00:11:38.127 06:37:50 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:11:39.516 Initializing NVMe Controllers
00:11:39.516 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:11:39.516 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:11:39.516 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:11:39.516 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:11:39.516 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:11:39.516 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:11:39.516 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:11:39.516 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:11:39.516 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:11:39.516 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:11:39.516 Initialization complete. Launching workers.
00:11:39.516 ========================================================
00:11:39.516 Latency(us)
00:11:39.516 Device Information : IOPS MiB/s Average min max
00:11:39.516 PCIE (0000:00:10.0) NSID 1 from core 0: 11982.18 140.42 10709.81 6601.63 38055.35
00:11:39.516 PCIE (0000:00:11.0) NSID 1 from core 0: 11982.18 140.42 10696.86 6696.26 36632.49
00:11:39.516 PCIE (0000:00:13.0) NSID 1 from core 0: 11982.18 140.42 10683.52 6640.30 35621.98
00:11:39.516 PCIE (0000:00:12.0) NSID 1 from core 0: 11982.18 140.42 10670.24 6739.65 33846.95
00:11:39.516 PCIE (0000:00:12.0) NSID 2 from core 0: 11982.18 140.42 10657.19 6623.88 32151.40
00:11:39.516 PCIE (0000:00:12.0) NSID 3 from core 0: 11982.18 140.42 10644.19 6786.69 30477.28
00:11:39.516 ========================================================
00:11:39.516 Total : 71893.07 842.50 10676.97 6601.63 38055.35
00:11:39.516
00:11:39.516 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:11:39.516 =================================================================================
00:11:39.516 1.00000% : 6906.486us
00:11:39.516 10.00000% : 8721.329us
00:11:39.516 25.00000% : 9427.102us
00:11:39.516 50.00000% : 10284.111us
00:11:39.516 75.00000% : 11544.418us
00:11:39.516 90.00000% : 13107.200us
00:11:39.516 95.00000% : 14115.446us
00:11:39.516 98.00000% : 15123.692us
00:11:39.516 99.00000% : 27020.997us
00:11:39.516 99.50000% : 36095.212us
00:11:39.516 99.90000% : 37708.406us
00:11:39.516 99.99000% : 38111.705us
00:11:39.516 99.99900% : 38111.705us
00:11:39.516 99.99990% : 38111.705us
00:11:39.516 99.99999% : 38111.705us
00:11:39.516
00:11:39.516 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:11:39.516 =================================================================================
00:11:39.516 1.00000% : 7057.723us
00:11:39.516 10.00000% : 8822.154us
00:11:39.516 25.00000% : 9427.102us
00:11:39.516 50.00000% : 10233.698us
00:11:39.516 75.00000% : 11494.006us
00:11:39.516 90.00000% : 13006.375us
00:11:39.517 95.00000% : 14216.271us
00:11:39.517 98.00000% : 15123.692us
00:11:39.517 99.00000% : 26416.049us
00:11:39.517 99.50000% : 34885.317us
00:11:39.517 99.90000% : 36296.862us
00:11:39.517 99.99000% : 36700.160us
00:11:39.517 99.99900% : 36700.160us
00:11:39.517 99.99990% : 36700.160us
00:11:39.517 99.99999% : 36700.160us
00:11:39.517
00:11:39.517 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:11:39.517 =================================================================================
00:11:39.517 1.00000% : 7057.723us
00:11:39.517 10.00000% : 8822.154us
00:11:39.517 25.00000% : 9427.102us
00:11:39.517 50.00000% : 10233.698us
00:11:39.517 75.00000% : 11594.831us
00:11:39.517 90.00000% : 12855.138us
00:11:39.517 95.00000% : 14115.446us
00:11:39.517 98.00000% : 15022.868us
00:11:39.517 99.00000% : 25609.452us
00:11:39.517 99.50000% : 34078.720us
00:11:39.517 99.90000% : 35490.265us
00:11:39.517 99.99000% : 35691.914us
00:11:39.517 99.99900% : 35691.914us
00:11:39.517 99.99990% : 35691.914us
00:11:39.517 99.99999% : 35691.914us
00:11:39.517
00:11:39.517 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:11:39.517 =================================================================================
00:11:39.517 1.00000% : 7057.723us
00:11:39.517 10.00000% : 8922.978us
00:11:39.517 25.00000% : 9427.102us
00:11:39.517 50.00000% : 10284.111us
00:11:39.517 75.00000% : 11544.418us
00:11:39.517 90.00000% : 13006.375us
00:11:39.517 95.00000% : 14216.271us
00:11:39.517 98.00000% : 14821.218us
00:11:39.517 99.00000% : 25004.505us
00:11:39.517 99.50000% : 31457.280us
00:11:39.517 99.90000% : 33675.422us
00:11:39.517 99.99000% : 33877.071us
00:11:39.517 99.99900% : 33877.071us
00:11:39.517 99.99990% : 33877.071us
00:11:39.517 99.99999% : 33877.071us
00:11:39.517
00:11:39.517 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:11:39.517 =================================================================================
00:11:39.517 1.00000% : 7108.135us
00:11:39.517 10.00000% : 8872.566us
00:11:39.517 25.00000% : 9427.102us
00:11:39.517 50.00000% : 10284.111us
00:11:39.517 75.00000% : 11494.006us
00:11:39.517 90.00000% : 13006.375us
00:11:39.517 95.00000% : 13913.797us
00:11:39.517 98.00000% : 14922.043us
00:11:39.517 99.00000% : 24097.083us
00:11:39.517 99.50000% : 29844.086us
00:11:39.517 99.90000% : 32062.228us
00:11:39.517 99.99000% : 32263.877us
00:11:39.517 99.99900% : 32263.877us
00:11:39.517 99.99990% : 32263.877us
00:11:39.517 99.99999% : 32263.877us
00:11:39.517
00:11:39.517 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:11:39.517 =================================================================================
00:11:39.517 1.00000% : 7108.135us
00:11:39.517 10.00000% : 8822.154us
00:11:39.517 25.00000% : 9427.102us
00:11:39.517 50.00000% : 10284.111us
00:11:39.517 75.00000% : 11544.418us
00:11:39.517 90.00000% : 13006.375us
00:11:39.517 95.00000% : 14014.622us
00:11:39.517 98.00000% : 14922.043us
00:11:39.517 99.00000% : 22786.363us
00:11:39.517 99.50000% : 28432.542us
00:11:39.517 99.90000% : 30247.385us
00:11:39.517 99.99000% : 30650.683us
00:11:39.517 99.99900% : 30650.683us
00:11:39.517 99.99990% : 30650.683us
00:11:39.517 99.99999% : 30650.683us
00:11:39.517
00:11:39.517 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:11:39.517 ==============================================================================
00:11:39.517 Range in us Cumulative IO count
00:11:39.517 6553.600 - 6604.012: 0.0083% ( 1) 00:11:39.517 6604.012 - 6654.425: 0.0249% ( 2) 00:11:39.517 6654.425 - 6704.837: 0.0665% ( 5) 00:11:39.517 6704.837 - 6755.249: 0.1496% ( 10) 00:11:39.517 6755.249 - 6805.662: 0.2244% ( 9) 00:11:39.517 6805.662 - 6856.074: 0.6898% ( 56) 00:11:39.517 6856.074 - 6906.486: 1.0805% ( 47) 00:11:39.517 6906.486 - 6956.898: 1.2550% ( 21) 00:11:39.517 6956.898 - 7007.311: 1.4794% ( 27) 00:11:39.517 7007.311 - 7057.723: 1.8118% ( 40) 00:11:39.517 7057.723 - 7108.135: 2.0529% ( 29) 00:11:39.517 7108.135 - 7158.548: 2.2856% ( 28) 00:11:39.517 7158.548 - 7208.960: 2.4934% ( 25) 00:11:39.517 7208.960 - 7259.372: 2.7344% ( 29)
00:11:39.517 7259.372 - 7309.785: 3.1915% ( 55) 00:11:39.517 7309.785 - 7360.197: 3.5987% ( 49) 00:11:39.517 7360.197 - 7410.609: 3.8647% ( 32) 00:11:39.517 7410.609 - 7461.022: 4.2719% ( 49) 00:11:39.517 7461.022 - 7511.434: 4.6875% ( 50) 00:11:39.517 7511.434 - 7561.846: 4.9285% ( 29) 00:11:39.517 7561.846 - 7612.258: 5.2111% ( 34) 00:11:39.517 7612.258 - 7662.671: 5.4771% ( 32) 00:11:39.517 7662.671 - 7713.083: 5.7430% ( 32) 00:11:39.517 7713.083 - 7763.495: 5.8843% ( 17) 00:11:39.517 7763.495 - 7813.908: 6.0588% ( 21) 00:11:39.517 7813.908 - 7864.320: 6.3248% ( 32) 00:11:39.517 7864.320 - 7914.732: 6.5575% ( 28) 00:11:39.517 7914.732 - 7965.145: 6.7653% ( 25) 00:11:39.517 7965.145 - 8015.557: 7.0229% ( 31) 00:11:39.517 8015.557 - 8065.969: 7.2224% ( 24) 00:11:39.517 8065.969 - 8116.382: 7.3221% ( 12) 00:11:39.517 8116.382 - 8166.794: 7.4302% ( 13) 00:11:39.517 8166.794 - 8217.206: 7.5382% ( 13) 00:11:39.517 8217.206 - 8267.618: 7.6546% ( 14) 00:11:39.517 8267.618 - 8318.031: 7.9621% ( 37) 00:11:39.517 8318.031 - 8368.443: 8.2197% ( 31) 00:11:39.517 8368.443 - 8418.855: 8.3444% ( 15) 00:11:39.517 8418.855 - 8469.268: 8.4525% ( 13) 00:11:39.517 8469.268 - 8519.680: 8.6519% ( 24) 00:11:39.517 8519.680 - 8570.092: 8.8015% ( 18) 00:11:39.517 8570.092 - 8620.505: 9.0426% ( 29) 00:11:39.517 8620.505 - 8670.917: 9.6243% ( 70) 00:11:39.517 8670.917 - 8721.329: 10.2311% ( 73) 00:11:39.517 8721.329 - 8771.742: 10.9874% ( 91) 00:11:39.517 8771.742 - 8822.154: 11.6938% ( 85) 00:11:39.517 8822.154 - 8872.566: 12.5997% ( 109) 00:11:39.517 8872.566 - 8922.978: 13.4973% ( 108) 00:11:39.517 8922.978 - 8973.391: 14.2786% ( 94) 00:11:39.517 8973.391 - 9023.803: 15.2178% ( 113) 00:11:39.517 9023.803 - 9074.215: 16.1818% ( 116) 00:11:39.517 9074.215 - 9124.628: 17.7028% ( 183) 00:11:39.517 9124.628 - 9175.040: 19.1572% ( 175) 00:11:39.517 9175.040 - 9225.452: 20.3457% ( 143) 00:11:39.517 9225.452 - 9275.865: 21.6672% ( 159) 00:11:39.517 9275.865 - 9326.277: 22.9970% ( 160) 00:11:39.517 9326.277 - 9376.689: 24.3434% ( 162) 00:11:39.517 9376.689 - 9427.102: 25.9142% ( 189) 00:11:39.517 9427.102 - 9477.514: 27.6679% ( 211) 00:11:39.517 9477.514 - 9527.926: 29.3634% ( 204) 00:11:39.517 9527.926 - 9578.338: 30.8261% ( 176) 00:11:39.517 9578.338 - 9628.751: 32.3471% ( 183) 00:11:39.517 9628.751 - 9679.163: 33.6686% ( 159) 00:11:39.517 9679.163 - 9729.575: 34.9318% ( 152) 00:11:39.517 9729.575 - 9779.988: 36.5608% ( 196) 00:11:39.517 9779.988 - 9830.400: 38.0984% ( 185) 00:11:39.517 9830.400 - 9880.812: 39.9186% ( 219) 00:11:39.517 9880.812 - 9931.225: 41.6140% ( 204) 00:11:39.517 9931.225 - 9981.637: 43.2264% ( 194) 00:11:39.517 9981.637 - 10032.049: 44.3318% ( 133) 00:11:39.517 10032.049 - 10082.462: 45.4953% ( 140) 00:11:39.517 10082.462 - 10132.874: 46.6755% ( 142) 00:11:39.517 10132.874 - 10183.286: 47.9305% ( 151) 00:11:39.517 10183.286 - 10233.698: 49.1855% ( 151) 00:11:39.517 10233.698 - 10284.111: 50.4820% ( 156) 00:11:39.517 10284.111 - 10334.523: 51.8451% ( 164) 00:11:39.517 10334.523 - 10384.935: 53.0834% ( 149) 00:11:39.517 10384.935 - 10435.348: 54.3634% ( 154) 00:11:39.517 10435.348 - 10485.760: 55.4023% ( 125) 00:11:39.517 10485.760 - 10536.172: 56.5326% ( 136) 00:11:39.517 10536.172 - 10586.585: 57.4884% ( 115) 00:11:39.517 10586.585 - 10636.997: 58.7434% ( 151) 00:11:39.517 10636.997 - 10687.409: 60.0066% ( 152) 00:11:39.517 10687.409 - 10737.822: 61.2616% ( 151) 00:11:39.517 10737.822 - 10788.234: 62.4917% ( 148) 00:11:39.517 10788.234 - 10838.646: 63.6469% ( 139) 00:11:39.517 10838.646 - 
10889.058: 64.6692% ( 123) 00:11:39.517 10889.058 - 10939.471: 65.6084% ( 113) 00:11:39.517 10939.471 - 10989.883: 66.6556% ( 126) 00:11:39.517 10989.883 - 11040.295: 67.5033% ( 102) 00:11:39.517 11040.295 - 11090.708: 68.4092% ( 109) 00:11:39.517 11090.708 - 11141.120: 69.1240% ( 86) 00:11:39.517 11141.120 - 11191.532: 69.9967% ( 105) 00:11:39.517 11191.532 - 11241.945: 70.8444% ( 102) 00:11:39.517 11241.945 - 11292.357: 71.4262% ( 70) 00:11:39.518 11292.357 - 11342.769: 72.3155% ( 107) 00:11:39.518 11342.769 - 11393.182: 73.1965% ( 106) 00:11:39.518 11393.182 - 11443.594: 73.8946% ( 84) 00:11:39.518 11443.594 - 11494.006: 74.7922% ( 108) 00:11:39.518 11494.006 - 11544.418: 75.5319% ( 89) 00:11:39.518 11544.418 - 11594.831: 76.4794% ( 114) 00:11:39.518 11594.831 - 11645.243: 77.2274% ( 90) 00:11:39.518 11645.243 - 11695.655: 77.9505% ( 87) 00:11:39.518 11695.655 - 11746.068: 78.6735% ( 87) 00:11:39.518 11746.068 - 11796.480: 79.4631% ( 95) 00:11:39.518 11796.480 - 11846.892: 80.2277% ( 92) 00:11:39.518 11846.892 - 11897.305: 80.7596% ( 64) 00:11:39.518 11897.305 - 11947.717: 81.1835% ( 51) 00:11:39.518 11947.717 - 11998.129: 81.6240% ( 53) 00:11:39.518 11998.129 - 12048.542: 82.0312% ( 49) 00:11:39.518 12048.542 - 12098.954: 82.4967% ( 56) 00:11:39.518 12098.954 - 12149.366: 82.9039% ( 49) 00:11:39.518 12149.366 - 12199.778: 83.3610% ( 55) 00:11:39.518 12199.778 - 12250.191: 83.7350% ( 45) 00:11:39.518 12250.191 - 12300.603: 84.2171% ( 58) 00:11:39.518 12300.603 - 12351.015: 84.6659% ( 54) 00:11:39.518 12351.015 - 12401.428: 85.0898% ( 51) 00:11:39.518 12401.428 - 12451.840: 85.6051% ( 62) 00:11:39.518 12451.840 - 12502.252: 86.0705% ( 56) 00:11:39.518 12502.252 - 12552.665: 86.5608% ( 59) 00:11:39.518 12552.665 - 12603.077: 86.9016% ( 41) 00:11:39.518 12603.077 - 12653.489: 87.3255% ( 51) 00:11:39.518 12653.489 - 12703.902: 87.6745% ( 42) 00:11:39.518 12703.902 - 12754.314: 87.9987% ( 39) 00:11:39.518 12754.314 - 12804.726: 88.3976% ( 48) 00:11:39.518 12804.726 - 12855.138: 88.8381% ( 53) 00:11:39.518 12855.138 - 12905.551: 89.1622% ( 39) 00:11:39.518 12905.551 - 13006.375: 89.6858% ( 63) 00:11:39.518 13006.375 - 13107.200: 90.4338% ( 90) 00:11:39.518 13107.200 - 13208.025: 91.0655% ( 76) 00:11:39.518 13208.025 - 13308.849: 91.7138% ( 78) 00:11:39.518 13308.849 - 13409.674: 92.3371% ( 75) 00:11:39.518 13409.674 - 13510.498: 92.8358% ( 60) 00:11:39.518 13510.498 - 13611.323: 93.2347% ( 48) 00:11:39.518 13611.323 - 13712.148: 93.6503% ( 50) 00:11:39.518 13712.148 - 13812.972: 94.0076% ( 43) 00:11:39.518 13812.972 - 13913.797: 94.3983% ( 47) 00:11:39.518 13913.797 - 14014.622: 94.8554% ( 55) 00:11:39.518 14014.622 - 14115.446: 95.2128% ( 43) 00:11:39.518 14115.446 - 14216.271: 95.7197% ( 61) 00:11:39.518 14216.271 - 14317.095: 96.0854% ( 44) 00:11:39.518 14317.095 - 14417.920: 96.3930% ( 37) 00:11:39.518 14417.920 - 14518.745: 96.7337% ( 41) 00:11:39.518 14518.745 - 14619.569: 97.0828% ( 42) 00:11:39.518 14619.569 - 14720.394: 97.3487% ( 32) 00:11:39.518 14720.394 - 14821.218: 97.5648% ( 26) 00:11:39.518 14821.218 - 14922.043: 97.7643% ( 24) 00:11:39.518 14922.043 - 15022.868: 97.9721% ( 25) 00:11:39.518 15022.868 - 15123.692: 98.1799% ( 25) 00:11:39.518 15123.692 - 15224.517: 98.3793% ( 24) 00:11:39.518 15224.517 - 15325.342: 98.5372% ( 19) 00:11:39.518 15325.342 - 15426.166: 98.7201% ( 22) 00:11:39.518 15426.166 - 15526.991: 98.8614% ( 17) 00:11:39.518 15526.991 - 15627.815: 98.9029% ( 5) 00:11:39.518 15627.815 - 15728.640: 98.9362% ( 4) 00:11:39.518 26416.049 - 26617.698: 98.9611% ( 3) 
[remaining buckets of the preceding latency histogram omitted: cumulative climbs from 98.9943% at 26617.698 us to 100.0000% by 38111.705 us]
00:11:39.518 
00:11:39.518 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:11:39.518 ==============================================================================
00:11:39.518        Range in us     Cumulative    IO count
[per-bucket data omitted: first I/O lands in the 6654.425 - 6704.837 us bucket; cumulative reaches 100.0000% by 36700.160 us]
00:11:39.519 
00:11:39.519 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:11:39.519 ==============================================================================
00:11:39.519        Range in us     Cumulative    IO count
[per-bucket data omitted: first I/O lands in the 6604.012 - 6654.425 us bucket; cumulative reaches 100.0000% by 35691.914 us]
00:11:39.520 
00:11:39.520 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:11:39.520 ==============================================================================
00:11:39.520        Range in us     Cumulative    IO count
[per-bucket data omitted: first I/O lands in the 6704.837 - 6755.249 us bucket; cumulative reaches 100.0000% by 33877.071 us]
00:11:39.521 
00:11:39.521 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:11:39.521 ==============================================================================
00:11:39.521        Range in us     Cumulative    IO count
[per-bucket data omitted: first I/O lands in the 6604.012 - 6654.425 us bucket; cumulative reaches 100.0000% by 32263.877 us]
00:11:39.522 
00:11:39.522 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:11:39.522 ==============================================================================
00:11:39.522        Range in us     Cumulative    IO count
[per-bucket data omitted: first I/O lands in the 6755.249 - 6805.662 us bucket; cumulative reaches 100.0000% by 30650.683 us]
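The perf tool emits these histograms one bucket per line, in the `lo - hi: cum% ( count )` form summarized above, so headline percentiles can be pulled out by scanning for the first bucket whose cumulative column crosses the target. A minimal sketch, not part of the harness: it assumes GNU awk and the tool's native one-bucket-per-line output, and `build.log` is a stand-in for wherever this console text was captured.

```bash
#!/usr/bin/env bash
# Approximate p99 per "Latency histogram" block: report the upper edge of
# the first bucket whose cumulative percentage reaches 99%.
log=${1:?usage: $0 <build.log>}
gawk '
  /Latency histogram for/ {
    ctrl = $0
    sub(/.*Latency histogram for /, "", ctrl)   # keep "PCIE (...) NSID n from core 0:"
    found = 0
  }
  # bucket lines look like: 26819.348 - 27020.997: 99.0442% ( 6)
  match($0, /([0-9.]+) - ([0-9.]+): +([0-9.]+)% /, m) {
    if (ctrl != "" && !found && m[3] + 0 >= 99.0) {
      printf "%-45s p99 <= %s us\n", ctrl, m[2]
      found = 1
    }
  }
' "$log"
```

The same scan works for any cumulative target (p50, p99.9) by changing the comparison constant.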
00:11:39.523 
00:11:39.523 ************************************
00:11:39.523 END TEST nvme_perf
00:11:39.523 ************************************
00:11:39.523 06:37:51 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:11:39.523 
00:11:39.523 real 0m2.587s
00:11:39.523 user 0m2.254s
00:11:39.523 sys 0m0.206s
00:11:39.523 06:37:51 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:39.523 06:37:51 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:11:39.523 06:37:51 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:11:39.523 06:37:51 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:11:39.523 06:37:51 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:39.523 06:37:51 nvme -- common/autotest_common.sh@10 -- # set +x
00:11:39.523 ************************************
00:11:39.523 START TEST nvme_hello_world
00:11:39.523 ************************************
00:11:39.523 06:37:51 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:11:39.523 Initializing NVMe Controllers
00:11:39.523 Attached to 0000:00:10.0
00:11:39.523 Namespace ID: 1 size: 6GB
00:11:39.523 Attached to 0000:00:11.0
00:11:39.523 Namespace ID: 1 size: 5GB
00:11:39.523 Attached to 0000:00:13.0
00:11:39.523 Namespace ID: 1 size: 1GB
00:11:39.523 Attached to 0000:00:12.0
00:11:39.523 Namespace ID: 1 size: 4GB
00:11:39.523 Namespace ID: 2 size: 4GB
00:11:39.523 Namespace ID: 3 size: 4GB
00:11:39.523 Initialization complete.
00:11:39.523 INFO: using host memory buffer for IO
00:11:39.523 Hello world!
00:11:39.523 INFO: using host memory buffer for IO
00:11:39.523 Hello world!
00:11:39.523 INFO: using host memory buffer for IO
00:11:39.523 Hello world!
00:11:39.523 INFO: using host memory buffer for IO
00:11:39.523 Hello world!
00:11:39.523 INFO: using host memory buffer for IO
00:11:39.523 Hello world!
00:11:39.523 INFO: using host memory buffer for IO
00:11:39.523 Hello world!
00:11:39.523 
00:11:39.523 
00:11:39.523 real 0m0.264s
00:11:39.523 user 0m0.100s
00:11:39.523 sys 0m0.115s
00:11:39.523 ************************************
00:11:39.524 END TEST nvme_hello_world
00:11:39.524 ************************************
00:11:39.524 06:37:52 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:39.524 06:37:52 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
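For reference, a sketch of reproducing the hello_world run by hand, outside the run_test wrapper. The binary path and `-i 0` (shared-memory group id) are taken from the invocation above; the setup.sh step and the HUGEMEM value are assumptions about a typical local environment, not something this job shows.

```bash
cd /home/vagrant/spdk_repo/spdk
# reserve hugepages and bind the NVMe controllers to a userspace driver first
sudo HUGEMEM=2048 ./scripts/setup.sh
# then run the example against whatever controllers setup.sh claimed
sudo ./build/examples/hello_world -i 0
```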
00:11:39.524 06:37:52 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:11:39.524 06:37:52 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:11:39.524 06:37:52 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:39.524 06:37:52 nvme -- common/autotest_common.sh@10 -- # set +x
00:11:39.524 ************************************
00:11:39.524 START TEST nvme_sgl
00:11:39.524 ************************************
00:11:39.524 06:37:52 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:11:39.796 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:11:39.796 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:11:39.796 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:11:39.797 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:11:39.797 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:11:39.797 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:11:39.797 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:11:39.797 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:11:39.797 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:11:39.797 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:11:39.797 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:11:39.797 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:11:39.797 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:11:39.797 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:11:39.797 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:11:39.797 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:11:39.797 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:11:39.797 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:11:39.797 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:11:39.797 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:11:39.797 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:11:39.797 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:11:39.797 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:11:39.797 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:11:39.797 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:11:39.797 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:11:39.797 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:11:39.797 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:11:39.797 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:11:39.797 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:11:39.797 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:11:39.797 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:11:39.797 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:11:39.797 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:11:39.797 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:11:39.797 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:11:39.797 NVMe Readv/Writev Request test
00:11:39.797 Attached to 0000:00:10.0
00:11:39.797 Attached to 0000:00:11.0
00:11:39.797 Attached to 0000:00:13.0
00:11:39.797 Attached to 0000:00:12.0
00:11:39.797 0000:00:10.0: build_io_request_2 test passed
00:11:39.797 0000:00:10.0: build_io_request_4 test passed
00:11:39.797 0000:00:10.0: build_io_request_5 test passed
00:11:39.797 0000:00:10.0: build_io_request_6 test passed
00:11:39.797 0000:00:10.0: build_io_request_7 test passed
00:11:39.797 0000:00:10.0: build_io_request_10 test passed
00:11:39.797 0000:00:11.0: build_io_request_2 test passed
00:11:39.797 0000:00:11.0: build_io_request_4 test passed
00:11:39.797 0000:00:11.0: build_io_request_5 test passed
00:11:39.797 0000:00:11.0: build_io_request_6 test passed
00:11:39.797 0000:00:11.0: build_io_request_7 test passed
00:11:39.797 0000:00:11.0: build_io_request_10 test passed
00:11:39.797 Cleaning up...
00:11:39.797 
00:11:39.797 real 0m0.279s
00:11:39.797 user 0m0.143s
00:11:39.797 sys 0m0.096s
00:11:39.797 ************************************
00:11:39.797 END TEST nvme_sgl
00:11:39.797 ************************************
00:11:39.797 06:37:52 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:39.797 06:37:52 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
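Because the sgl binary logs one line per case per controller, its coverage can be tallied straight from the captured text. A small sketch under the assumption that the console output was saved to a file; the match strings are exactly the ones printed above.

```bash
log=${1:?usage: $0 <build.log>}
# count "test passed" vs "Invalid IO length parameter" per controller
grep -oE '0000:00:1[0-3]\.0: build_io_request_[0-9]+ (test passed|Invalid IO length parameter)' "$log" \
  | sed -E 's/build_io_request_[0-9]+ //' \
  | sort | uniq -c
```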
00:11:40.069 06:37:52 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:11:40.069 06:37:52 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:11:40.069 06:37:52 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:40.069 06:37:52 nvme -- common/autotest_common.sh@10 -- # set +x
00:11:40.069 ************************************
00:11:40.069 START TEST nvme_e2edp
00:11:40.069 ************************************
00:11:40.069 06:37:52 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:11:40.069 NVMe Write/Read with End-to-End data protection test
00:11:40.069 Attached to 0000:00:10.0
00:11:40.069 Attached to 0000:00:11.0
00:11:40.069 Attached to 0000:00:13.0
00:11:40.069 Attached to 0000:00:12.0
00:11:40.069 Cleaning up...
00:11:40.069 
00:11:40.069 real 0m0.211s
00:11:40.069 user 0m0.071s
00:11:40.069 sys 0m0.094s
00:11:40.069 ************************************
00:11:40.069 END TEST nvme_e2edp
00:11:40.070 ************************************
00:11:40.070 06:37:52 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:40.070 06:37:52 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:11:40.070 06:37:52 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:11:40.070 06:37:52 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:11:40.070 06:37:52 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:40.070 06:37:52 nvme -- common/autotest_common.sh@10 -- # set +x
00:11:40.070 ************************************
00:11:40.070 START TEST nvme_reserve
00:11:40.070 ************************************
00:11:40.070 06:37:52 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:11:40.330 =====================================================
00:11:40.330 NVMe Controller at PCI bus 0, device 16, function 0
00:11:40.330 =====================================================
00:11:40.330 Reservations: Not Supported
00:11:40.330 =====================================================
00:11:40.330 NVMe Controller at PCI bus 0, device 17, function 0
00:11:40.330 =====================================================
00:11:40.330 Reservations: Not Supported
00:11:40.330 =====================================================
00:11:40.330 NVMe Controller at PCI bus 0, device 19, function 0
00:11:40.330 =====================================================
00:11:40.330 Reservations: Not Supported
00:11:40.330 =====================================================
00:11:40.330 NVMe Controller at PCI bus 0, device 18, function 0
00:11:40.330 =====================================================
00:11:40.330 Reservations: Not Supported
00:11:40.330 Reservation test passed
00:11:40.330 
00:11:40.330 real 0m0.215s
00:11:40.330 user 0m0.074s
00:11:40.330 sys 0m0.093s
00:11:40.330 06:37:53 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:40.330 ************************************
00:11:40.330 END TEST nvme_reserve
00:11:40.330 ************************************
00:11:40.330 06:37:53 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:11:40.330 06:37:53 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:11:40.330 06:37:53 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:11:40.330 06:37:53 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:40.330 06:37:53 nvme -- common/autotest_common.sh@10 -- # set +x
00:11:40.330 ************************************
00:11:40.330 START TEST nvme_err_injection
00:11:40.330 ************************************
00:11:40.330 06:37:53 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:11:40.589 NVMe Error Injection test
00:11:40.589 Attached to 0000:00:10.0
00:11:40.589 Attached to 0000:00:11.0
00:11:40.589 Attached to 0000:00:13.0
00:11:40.589 Attached to 0000:00:12.0
00:11:40.589 0000:00:10.0: get features failed as expected
00:11:40.589 0000:00:11.0: get features failed as expected
00:11:40.589 0000:00:13.0: get features failed as expected
00:11:40.589 0000:00:12.0: get features failed as expected
00:11:40.589 0000:00:10.0: get features successfully as expected
00:11:40.589 0000:00:11.0: get features successfully as expected
00:11:40.589 0000:00:13.0: get features successfully as expected
00:11:40.589 0000:00:12.0: get features successfully as expected
00:11:40.589 0000:00:10.0: read failed as expected
00:11:40.589 0000:00:11.0: read failed as expected
00:11:40.589 0000:00:13.0: read failed as expected
00:11:40.589 0000:00:12.0: read failed as expected
00:11:40.589 0000:00:10.0: read successfully as expected
00:11:40.589 0000:00:11.0: read successfully as expected
00:11:40.589 0000:00:13.0: read successfully as expected
00:11:40.589 0000:00:12.0: read successfully as expected
00:11:40.589 Cleaning up...
00:11:40.589 ************************************
00:11:40.589 END TEST nvme_err_injection
00:11:40.589 ************************************
00:11:40.589 
00:11:40.589 real 0m0.247s
00:11:40.589 user 0m0.088s
00:11:40.589 sys 0m0.110s
00:11:40.589 06:37:53 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:40.589 06:37:53 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:11:40.848 06:37:53 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:11:40.848 06:37:53 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']'
00:11:40.848 06:37:53 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:40.848 06:37:53 nvme -- common/autotest_common.sh@10 -- # set +x
00:11:40.848 ************************************
00:11:40.848 START TEST nvme_overhead
00:11:40.848 ************************************
00:11:40.848 06:37:53 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:11:42.229 Initializing NVMe Controllers
00:11:42.229 Attached to 0000:00:10.0
00:11:42.229 Attached to 0000:00:11.0
00:11:42.229 Attached to 0000:00:13.0
00:11:42.229 Attached to 0000:00:12.0
00:11:42.229 Initialization complete. Launching workers.
00:11:42.229 
00:11:42.229 submit (in ns) avg, min, max = 11850.4, 10120.0, 362360.8
00:11:42.229 complete (in ns) avg, min, max = 8076.8, 7280.0, 81541.5
00:11:42.229 
00:11:42.229 Submit histogram
00:11:42.229 ================
00:11:42.229        Range in us     Cumulative     Count
[per-bucket data omitted: buckets run from 10.092 us upward, the median submit latency falls near 11.5 us, cumulative passes 98% around 15.85 us and stands at 99.97% by 64.591 us, where the captured log breaks off mid-bucket]
1) 00:11:42.230 64.985 - 65.378: 99.9781% ( 1) 00:11:42.230 66.166 - 66.560: 99.9854% ( 1) 00:11:42.230 104.763 - 105.551: 99.9927% ( 1) 00:11:42.230 362.338 - 363.914: 100.0000% ( 1) 00:11:42.230 00:11:42.230 Complete histogram 00:11:42.230 ================== 00:11:42.230 Range in us Cumulative Count 00:11:42.230 7.237 - 7.286: 0.0073% ( 1) 00:11:42.230 7.286 - 7.335: 0.0365% ( 4) 00:11:42.230 7.335 - 7.385: 0.6498% ( 84) 00:11:42.230 7.385 - 7.434: 4.0666% ( 468) 00:11:42.230 7.434 - 7.483: 11.3529% ( 998) 00:11:42.230 7.483 - 7.532: 20.2818% ( 1223) 00:11:42.230 7.532 - 7.582: 27.9769% ( 1054) 00:11:42.230 7.582 - 7.631: 34.8470% ( 941) 00:11:42.230 7.631 - 7.680: 40.2497% ( 740) 00:11:42.230 7.680 - 7.729: 45.0245% ( 654) 00:11:42.230 7.729 - 7.778: 48.5362% ( 481) 00:11:42.230 7.778 - 7.828: 51.1061% ( 352) 00:11:42.230 7.828 - 7.877: 53.3840% ( 312) 00:11:42.230 7.877 - 7.926: 54.8806% ( 205) 00:11:42.230 7.926 - 7.975: 56.1875% ( 179) 00:11:42.230 7.975 - 8.025: 57.2534% ( 146) 00:11:42.230 8.025 - 8.074: 58.6625% ( 193) 00:11:42.230 8.074 - 8.123: 61.7361% ( 421) 00:11:42.230 8.123 - 8.172: 67.2775% ( 759) 00:11:42.230 8.172 - 8.222: 73.0525% ( 791) 00:11:42.230 8.222 - 8.271: 78.1777% ( 702) 00:11:42.230 8.271 - 8.320: 82.6677% ( 615) 00:11:42.230 8.320 - 8.369: 85.9020% ( 443) 00:11:42.230 8.369 - 8.418: 88.2383% ( 320) 00:11:42.230 8.418 - 8.468: 90.2168% ( 271) 00:11:42.230 8.468 - 8.517: 91.6332% ( 194) 00:11:42.230 8.517 - 8.566: 92.6845% ( 144) 00:11:42.230 8.566 - 8.615: 93.3562% ( 92) 00:11:42.230 8.615 - 8.665: 93.9330% ( 79) 00:11:42.230 8.665 - 8.714: 94.3491% ( 57) 00:11:42.230 8.714 - 8.763: 94.5901% ( 33) 00:11:42.230 8.763 - 8.812: 94.9478% ( 49) 00:11:42.230 8.812 - 8.862: 95.2033% ( 35) 00:11:42.230 8.862 - 8.911: 95.4297% ( 31) 00:11:42.230 8.911 - 8.960: 95.7144% ( 39) 00:11:42.230 8.960 - 9.009: 95.9991% ( 39) 00:11:42.230 9.009 - 9.058: 96.1743% ( 24) 00:11:42.230 9.058 - 9.108: 96.3277% ( 21) 00:11:42.230 9.108 - 9.157: 96.4737% ( 20) 00:11:42.230 9.157 - 9.206: 96.6051% ( 18) 00:11:42.230 9.206 - 9.255: 96.7511% ( 20) 00:11:42.230 9.255 - 9.305: 96.8679% ( 16) 00:11:42.230 9.305 - 9.354: 96.9774% ( 15) 00:11:42.230 9.354 - 9.403: 97.0139% ( 5) 00:11:42.230 9.403 - 9.452: 97.0504% ( 5) 00:11:42.230 9.452 - 9.502: 97.0943% ( 6) 00:11:42.230 9.502 - 9.551: 97.1381% ( 6) 00:11:42.230 9.551 - 9.600: 97.1527% ( 2) 00:11:42.230 9.600 - 9.649: 97.1965% ( 6) 00:11:42.230 9.649 - 9.698: 97.2257% ( 4) 00:11:42.230 9.698 - 9.748: 97.2549% ( 4) 00:11:42.230 9.748 - 9.797: 97.2622% ( 1) 00:11:42.230 9.797 - 9.846: 97.2914% ( 4) 00:11:42.230 9.846 - 9.895: 97.3133% ( 3) 00:11:42.230 9.895 - 9.945: 97.3206% ( 1) 00:11:42.230 9.945 - 9.994: 97.3498% ( 4) 00:11:42.230 9.994 - 10.043: 97.3790% ( 4) 00:11:42.230 10.043 - 10.092: 97.3936% ( 2) 00:11:42.230 10.092 - 10.142: 97.4082% ( 2) 00:11:42.230 10.142 - 10.191: 97.4447% ( 5) 00:11:42.230 10.191 - 10.240: 97.4739% ( 4) 00:11:42.230 10.240 - 10.289: 97.4812% ( 1) 00:11:42.230 10.289 - 10.338: 97.4885% ( 1) 00:11:42.230 10.338 - 10.388: 97.5177% ( 4) 00:11:42.230 10.388 - 10.437: 97.5542% ( 5) 00:11:42.230 10.437 - 10.486: 97.5834% ( 4) 00:11:42.230 10.486 - 10.535: 97.6053% ( 3) 00:11:42.230 10.535 - 10.585: 97.6491% ( 6) 00:11:42.230 10.585 - 10.634: 97.6929% ( 6) 00:11:42.230 10.634 - 10.683: 97.7221% ( 4) 00:11:42.230 10.683 - 10.732: 97.7878% ( 9) 00:11:42.230 10.732 - 10.782: 97.8024% ( 2) 00:11:42.230 10.782 - 10.831: 97.8243% ( 3) 00:11:42.230 10.831 - 10.880: 97.8389% ( 2) 00:11:42.230 10.880 - 10.929: 97.8681% ( 4) 
00:11:42.230 10.929 - 10.978: 97.8900% ( 3) 00:11:42.230 10.978 - 11.028: 97.9193% ( 4) 00:11:42.230 11.028 - 11.077: 97.9485% ( 4) 00:11:42.230 11.077 - 11.126: 97.9777% ( 4) 00:11:42.230 11.175 - 11.225: 98.0069% ( 4) 00:11:42.230 11.225 - 11.274: 98.0288% ( 3) 00:11:42.230 11.274 - 11.323: 98.0653% ( 5) 00:11:42.230 11.323 - 11.372: 98.0872% ( 3) 00:11:42.230 11.372 - 11.422: 98.1091% ( 3) 00:11:42.230 11.422 - 11.471: 98.1675% ( 8) 00:11:42.230 11.471 - 11.520: 98.1967% ( 4) 00:11:42.230 11.520 - 11.569: 98.2040% ( 1) 00:11:42.230 11.569 - 11.618: 98.2259% ( 3) 00:11:42.230 11.618 - 11.668: 98.2697% ( 6) 00:11:42.230 11.668 - 11.717: 98.2843% ( 2) 00:11:42.230 11.717 - 11.766: 98.3062% ( 3) 00:11:42.230 11.766 - 11.815: 98.3208% ( 2) 00:11:42.230 11.815 - 11.865: 98.3281% ( 1) 00:11:42.230 11.865 - 11.914: 98.3354% ( 1) 00:11:42.230 11.914 - 11.963: 98.3427% ( 1) 00:11:42.230 11.963 - 12.012: 98.3573% ( 2) 00:11:42.230 12.012 - 12.062: 98.3646% ( 1) 00:11:42.230 12.209 - 12.258: 98.3719% ( 1) 00:11:42.230 12.258 - 12.308: 98.3792% ( 1) 00:11:42.230 12.308 - 12.357: 98.3865% ( 1) 00:11:42.230 12.357 - 12.406: 98.3938% ( 1) 00:11:42.230 12.406 - 12.455: 98.4084% ( 2) 00:11:42.230 12.455 - 12.505: 98.4157% ( 1) 00:11:42.230 12.554 - 12.603: 98.4230% ( 1) 00:11:42.230 12.603 - 12.702: 98.4376% ( 2) 00:11:42.230 12.800 - 12.898: 98.4449% ( 1) 00:11:42.230 13.095 - 13.194: 98.4595% ( 2) 00:11:42.230 13.194 - 13.292: 98.4814% ( 3) 00:11:42.230 13.292 - 13.391: 98.4887% ( 1) 00:11:42.230 13.391 - 13.489: 98.5179% ( 4) 00:11:42.230 13.489 - 13.588: 98.5325% ( 2) 00:11:42.230 13.588 - 13.686: 98.5836% ( 7) 00:11:42.230 13.686 - 13.785: 98.5982% ( 2) 00:11:42.230 13.785 - 13.883: 98.6201% ( 3) 00:11:42.230 13.883 - 13.982: 98.6858% ( 9) 00:11:42.230 13.982 - 14.080: 98.7369% ( 7) 00:11:42.230 14.080 - 14.178: 98.7881% ( 7) 00:11:42.230 14.178 - 14.277: 98.8684% ( 11) 00:11:42.230 14.277 - 14.375: 98.9706% ( 14) 00:11:42.230 14.375 - 14.474: 99.0363% ( 9) 00:11:42.230 14.474 - 14.572: 99.1093% ( 10) 00:11:42.230 14.572 - 14.671: 99.2042% ( 13) 00:11:42.230 14.671 - 14.769: 99.2407% ( 5) 00:11:42.230 14.769 - 14.868: 99.2991% ( 8) 00:11:42.230 14.868 - 14.966: 99.3575% ( 8) 00:11:42.230 14.966 - 15.065: 99.4451% ( 12) 00:11:42.230 15.065 - 15.163: 99.4962% ( 7) 00:11:42.230 15.163 - 15.262: 99.5327% ( 5) 00:11:42.230 15.262 - 15.360: 99.5765% ( 6) 00:11:42.230 15.360 - 15.458: 99.6277% ( 7) 00:11:42.230 15.458 - 15.557: 99.6423% ( 2) 00:11:42.230 15.557 - 15.655: 99.6569% ( 2) 00:11:42.230 15.655 - 15.754: 99.6861% ( 4) 00:11:42.230 15.951 - 16.049: 99.7080% ( 3) 00:11:42.230 16.049 - 16.148: 99.7299% ( 3) 00:11:42.230 16.148 - 16.246: 99.7372% ( 1) 00:11:42.230 16.345 - 16.443: 99.7445% ( 1) 00:11:42.230 16.542 - 16.640: 99.7591% ( 2) 00:11:42.230 16.738 - 16.837: 99.7664% ( 1) 00:11:42.230 16.837 - 16.935: 99.7737% ( 1) 00:11:42.230 17.231 - 17.329: 99.7810% ( 1) 00:11:42.230 17.526 - 17.625: 99.7883% ( 1) 00:11:42.231 17.723 - 17.822: 99.7956% ( 1) 00:11:42.231 18.117 - 18.215: 99.8102% ( 2) 00:11:42.231 18.314 - 18.412: 99.8175% ( 1) 00:11:42.231 18.412 - 18.511: 99.8321% ( 2) 00:11:42.231 18.806 - 18.905: 99.8467% ( 2) 00:11:42.231 19.200 - 19.298: 99.8540% ( 1) 00:11:42.231 19.397 - 19.495: 99.8613% ( 1) 00:11:42.231 19.594 - 19.692: 99.8686% ( 1) 00:11:42.231 19.692 - 19.791: 99.8759% ( 1) 00:11:42.231 20.086 - 20.185: 99.8832% ( 1) 00:11:42.231 20.185 - 20.283: 99.8905% ( 1) 00:11:42.231 20.382 - 20.480: 99.8978% ( 1) 00:11:42.231 20.578 - 20.677: 99.9051% ( 1) 00:11:42.231 20.677 - 20.775: 
99.9124% ( 1) 00:11:42.231 21.268 - 21.366: 99.9197% ( 1) 00:11:42.231 21.662 - 21.760: 99.9270% ( 1) 00:11:42.231 21.957 - 22.055: 99.9343% ( 1) 00:11:42.231 23.237 - 23.335: 99.9416% ( 1) 00:11:42.231 23.335 - 23.434: 99.9489% ( 1) 00:11:42.231 31.508 - 31.705: 99.9562% ( 1) 00:11:42.231 32.689 - 32.886: 99.9635% ( 1) 00:11:42.231 38.991 - 39.188: 99.9708% ( 1) 00:11:42.231 49.428 - 49.625: 99.9781% ( 1) 00:11:42.231 53.169 - 53.563: 99.9854% ( 1) 00:11:42.231 74.831 - 75.225: 99.9927% ( 1) 00:11:42.231 81.526 - 81.920: 100.0000% ( 1) 00:11:42.231 00:11:42.231 ************************************ 00:11:42.231 END TEST nvme_overhead 00:11:42.231 ************************************ 00:11:42.231 00:11:42.231 real 0m1.226s 00:11:42.231 user 0m1.078s 00:11:42.231 sys 0m0.095s 00:11:42.231 06:37:54 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:42.231 06:37:54 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:11:42.231 06:37:54 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:11:42.231 06:37:54 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:11:42.231 06:37:54 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:42.231 06:37:54 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:42.231 ************************************ 00:11:42.231 START TEST nvme_arbitration 00:11:42.231 ************************************ 00:11:42.231 06:37:54 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:11:45.529 Initializing NVMe Controllers 00:11:45.529 Attached to 0000:00:10.0 00:11:45.529 Attached to 0000:00:11.0 00:11:45.529 Attached to 0000:00:13.0 00:11:45.529 Attached to 0000:00:12.0 00:11:45.529 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:11:45.529 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:11:45.529 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:11:45.529 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:11:45.529 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:11:45.529 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:11:45.529 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:11:45.529 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:11:45.529 Initialization complete. Launching workers. 
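The arbitration example exercises controller arbitration between per-core submission queues; the tool echoes its full effective configuration (-q 64 -s 131072 -w randrw -M 50 ... -c 0xf ... -n 100000) in the "run with configuration" line just above, and the per-core urgent-priority threads it spawns are reported below. A sketch of the same launch, with flag readings inferred from that echoed configuration:

    # Same launch as nvme.sh@93 above; all other options fall back to the
    # defaults echoed in the "run with configuration" line.
    SPDK=/home/vagrant/spdk_repo/spdk
    # -t 3: run for 3 seconds; -i 0: shared-memory group id (assumption).
    sudo "$SPDK/build/examples/arbitration" -t 3 -i 0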
00:11:45.529 Starting thread on core 1 with urgent priority queue 00:11:45.529 Starting thread on core 2 with urgent priority queue 00:11:45.529 Starting thread on core 3 with urgent priority queue 00:11:45.529 Starting thread on core 0 with urgent priority queue 00:11:45.529 QEMU NVMe Ctrl (12340 ) core 0: 717.67 IO/s 139.34 secs/100000 ios 00:11:45.529 QEMU NVMe Ctrl (12342 ) core 0: 704.00 IO/s 142.05 secs/100000 ios 00:11:45.529 QEMU NVMe Ctrl (12341 ) core 1: 727.33 IO/s 137.49 secs/100000 ios 00:11:45.529 QEMU NVMe Ctrl (12342 ) core 1: 742.00 IO/s 134.77 secs/100000 ios 00:11:45.529 QEMU NVMe Ctrl (12343 ) core 2: 712.00 IO/s 140.45 secs/100000 ios 00:11:45.529 QEMU NVMe Ctrl (12342 ) core 3: 752.33 IO/s 132.92 secs/100000 ios 00:11:45.530 ======================================================== 00:11:45.530 00:11:45.530 ************************************ 00:11:45.530 END TEST nvme_arbitration 00:11:45.530 ************************************ 00:11:45.530 00:11:45.530 real 0m3.319s 00:11:45.530 user 0m9.231s 00:11:45.530 sys 0m0.115s 00:11:45.530 06:37:57 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:45.530 06:37:57 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:11:45.530 06:37:57 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:11:45.530 06:37:57 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:45.530 06:37:57 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:45.530 06:37:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:45.530 ************************************ 00:11:45.530 START TEST nvme_single_aen 00:11:45.530 ************************************ 00:11:45.530 06:37:57 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:11:45.530 Asynchronous Event Request test 00:11:45.530 Attached to 0000:00:10.0 00:11:45.530 Attached to 0000:00:11.0 00:11:45.530 Attached to 0000:00:13.0 00:11:45.530 Attached to 0000:00:12.0 00:11:45.530 Reset controller to setup AER completions for this process 00:11:45.530 Registering asynchronous event callbacks... 
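The aer tool drives the temperature-threshold sequence traced below: read each controller's original threshold, set it under the current temperature so the drive fires an Asynchronous Event, then restore the threshold from the callback. A sketch of the invocation; the -T reading is an assumption matching that output:

    # Single-process AER test, as invoked by nvme.sh@94 above.
    # -T: run the temperature-threshold AER scenario (assumption);
    # -i 0: shared-memory group id (assumption).
    sudo /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0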
00:11:45.530 Getting orig temperature thresholds of all controllers 00:11:45.530 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:45.530 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:45.530 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:45.530 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:45.530 Setting all controllers temperature threshold low to trigger AER 00:11:45.530 Waiting for all controllers temperature threshold to be set lower 00:11:45.530 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:45.530 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:11:45.530 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:45.530 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:11:45.530 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:45.530 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:11:45.530 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:45.530 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:11:45.530 Waiting for all controllers to trigger AER and reset threshold 00:11:45.530 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:45.530 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:45.530 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:45.530 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:45.530 Cleaning up... 00:11:45.530 ************************************ 00:11:45.530 END TEST nvme_single_aen 00:11:45.530 ************************************ 00:11:45.530 00:11:45.530 real 0m0.226s 00:11:45.530 user 0m0.075s 00:11:45.530 sys 0m0.101s 00:11:45.530 06:37:58 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:45.530 06:37:58 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:11:45.530 06:37:58 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:11:45.530 06:37:58 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:45.530 06:37:58 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:45.530 06:37:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:45.530 ************************************ 00:11:45.530 START TEST nvme_doorbell_aers 00:11:45.530 ************************************ 00:11:45.530 06:37:58 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:11:45.530 06:37:58 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:11:45.530 06:37:58 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:11:45.530 06:37:58 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:11:45.530 06:37:58 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:11:45.530 06:37:58 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:45.530 06:37:58 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:11:45.530 06:37:58 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:45.530 06:37:58 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:45.530 06:37:58 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
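nvme_doorbell_aers first resolves which PCI devices to hit: gen_nvme.sh emits a bdev JSON config and jq extracts each controller's transport address. The loop that follows then runs doorbell_aers once per device under a 10-second timeout. A condensed sketch of the same flow, using only commands visible in this trace:

    SPDK=/home/vagrant/spdk_repo/spdk
    # Enumerate NVMe PCI addresses (prints 0000:00:10.0 .. 0000:00:13.0 on this VM).
    bdfs=($("$SPDK/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    printf '%s\n' "${bdfs[@]}"
    # One doorbell_aers pass per controller, capped at 10 seconds each.
    for bdf in "${bdfs[@]}"; do
        timeout --preserve-status 10 \
            "$SPDK/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
    done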
00:11:45.791 06:37:58 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:45.791 06:37:58 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:45.791 06:37:58 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:45.791 06:37:58 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:45.791 [2024-12-06 06:37:58.487451] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63469) is not found. Dropping the request. 00:11:55.907 Executing: test_write_invalid_db 00:11:55.907 Waiting for AER completion... 00:11:55.907 Failure: test_write_invalid_db 00:11:55.907 00:11:55.907 Executing: test_invalid_db_write_overflow_sq 00:11:55.907 Waiting for AER completion... 00:11:55.907 Failure: test_invalid_db_write_overflow_sq 00:11:55.907 00:11:55.907 Executing: test_invalid_db_write_overflow_cq 00:11:55.907 Waiting for AER completion... 00:11:55.907 Failure: test_invalid_db_write_overflow_cq 00:11:55.907 00:11:55.907 06:38:08 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:55.907 06:38:08 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:55.907 [2024-12-06 06:38:08.506486] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63469) is not found. Dropping the request. 00:12:05.902 Executing: test_write_invalid_db 00:12:05.902 Waiting for AER completion... 00:12:05.902 Failure: test_write_invalid_db 00:12:05.902 00:12:05.902 Executing: test_invalid_db_write_overflow_sq 00:12:05.902 Waiting for AER completion... 00:12:05.902 Failure: test_invalid_db_write_overflow_sq 00:12:05.902 00:12:05.902 Executing: test_invalid_db_write_overflow_cq 00:12:05.902 Waiting for AER completion... 00:12:05.902 Failure: test_invalid_db_write_overflow_cq 00:12:05.902 00:12:05.902 06:38:18 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:05.903 06:38:18 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:12:05.903 [2024-12-06 06:38:18.544048] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63469) is not found. Dropping the request. 00:12:15.871 Executing: test_write_invalid_db 00:12:15.871 Waiting for AER completion... 00:12:15.871 Failure: test_write_invalid_db 00:12:15.871 00:12:15.871 Executing: test_invalid_db_write_overflow_sq 00:12:15.871 Waiting for AER completion... 00:12:15.871 Failure: test_invalid_db_write_overflow_sq 00:12:15.871 00:12:15.871 Executing: test_invalid_db_write_overflow_cq 00:12:15.871 Waiting for AER completion... 
00:12:15.871 Failure: test_invalid_db_write_overflow_cq 00:12:15.871 00:12:15.872 06:38:28 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:15.872 06:38:28 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:12:15.872 [2024-12-06 06:38:28.578724] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63469) is not found. Dropping the request. 00:12:25.888 Executing: test_write_invalid_db 00:12:25.888 Waiting for AER completion... 00:12:25.888 Failure: test_write_invalid_db 00:12:25.888 00:12:25.888 Executing: test_invalid_db_write_overflow_sq 00:12:25.888 Waiting for AER completion... 00:12:25.888 Failure: test_invalid_db_write_overflow_sq 00:12:25.888 00:12:25.888 Executing: test_invalid_db_write_overflow_cq 00:12:25.888 Waiting for AER completion... 00:12:25.888 Failure: test_invalid_db_write_overflow_cq 00:12:25.888 00:12:25.888 ************************************ 00:12:25.888 END TEST nvme_doorbell_aers 00:12:25.888 ************************************ 00:12:25.888 00:12:25.888 real 0m40.182s 00:12:25.888 user 0m34.201s 00:12:25.888 sys 0m5.599s 00:12:25.888 06:38:38 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:25.888 06:38:38 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:12:25.888 06:38:38 nvme -- nvme/nvme.sh@97 -- # uname 00:12:25.888 06:38:38 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:12:25.888 06:38:38 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:12:25.888 06:38:38 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:12:25.888 06:38:38 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:25.888 06:38:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:25.888 ************************************ 00:12:25.888 START TEST nvme_multi_aen 00:12:25.888 ************************************ 00:12:25.888 06:38:38 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:12:26.148 [2024-12-06 06:38:38.630990] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63469) is not found. Dropping the request. 00:12:26.148 [2024-12-06 06:38:38.631058] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63469) is not found. Dropping the request. 00:12:26.148 [2024-12-06 06:38:38.631070] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63469) is not found. Dropping the request. 00:12:26.148 [2024-12-06 06:38:38.632764] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63469) is not found. Dropping the request. 00:12:26.148 [2024-12-06 06:38:38.632813] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63469) is not found. Dropping the request. 00:12:26.148 [2024-12-06 06:38:38.632824] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63469) is not found. Dropping the request. 00:12:26.148 [2024-12-06 06:38:38.633943] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63469) is not found. 
Dropping the request. 00:12:26.148 [2024-12-06 06:38:38.633972] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63469) is not found. Dropping the request. 00:12:26.148 [2024-12-06 06:38:38.633982] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63469) is not found. Dropping the request. 00:12:26.148 [2024-12-06 06:38:38.635168] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63469) is not found. Dropping the request. 00:12:26.148 [2024-12-06 06:38:38.635196] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63469) is not found. Dropping the request. 00:12:26.148 [2024-12-06 06:38:38.635206] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63469) is not found. Dropping the request. 00:12:26.148 Child process pid: 63990 00:12:26.148 [Child] Asynchronous Event Request test 00:12:26.148 [Child] Attached to 0000:00:10.0 00:12:26.148 [Child] Attached to 0000:00:11.0 00:12:26.148 [Child] Attached to 0000:00:13.0 00:12:26.148 [Child] Attached to 0000:00:12.0 00:12:26.148 [Child] Registering asynchronous event callbacks... 00:12:26.148 [Child] Getting orig temperature thresholds of all controllers 00:12:26.148 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:26.148 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:26.148 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:26.148 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:26.148 [Child] Waiting for all controllers to trigger AER and reset threshold 00:12:26.148 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:26.148 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:26.148 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:26.148 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:26.148 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:26.148 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:26.148 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:26.148 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:26.148 [Child] Cleaning up... 00:12:26.409 Asynchronous Event Request test 00:12:26.409 Attached to 0000:00:10.0 00:12:26.409 Attached to 0000:00:11.0 00:12:26.409 Attached to 0000:00:13.0 00:12:26.409 Attached to 0000:00:12.0 00:12:26.409 Reset controller to setup AER completions for this process 00:12:26.409 Registering asynchronous event callbacks... 
00:12:26.409 Getting orig temperature thresholds of all controllers 00:12:26.409 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:26.409 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:26.409 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:26.409 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:26.409 Setting all controllers temperature threshold low to trigger AER 00:12:26.409 Waiting for all controllers temperature threshold to be set lower 00:12:26.409 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:26.409 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:12:26.409 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:26.409 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:12:26.409 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:26.409 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:12:26.409 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:26.409 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:12:26.409 Waiting for all controllers to trigger AER and reset threshold 00:12:26.409 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:26.409 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:26.409 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:26.409 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:26.409 Cleaning up... 00:12:26.409 ************************************ 00:12:26.409 END TEST nvme_multi_aen 00:12:26.409 ************************************ 00:12:26.409 00:12:26.409 real 0m0.465s 00:12:26.409 user 0m0.150s 00:12:26.409 sys 0m0.190s 00:12:26.409 06:38:38 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:26.409 06:38:38 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:12:26.409 06:38:38 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:12:26.409 06:38:38 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:26.409 06:38:38 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:26.409 06:38:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:26.409 ************************************ 00:12:26.409 START TEST nvme_startup 00:12:26.409 ************************************ 00:12:26.409 06:38:38 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:12:26.669 Initializing NVMe Controllers 00:12:26.669 Attached to 0000:00:10.0 00:12:26.669 Attached to 0000:00:11.0 00:12:26.669 Attached to 0000:00:13.0 00:12:26.669 Attached to 0000:00:12.0 00:12:26.669 Initialization complete. 00:12:26.669 Time used:149161.859 (us). 
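nvme_startup is a plain timing probe: attach all four controllers, report the wall-clock initialization time (about 149 ms here), and exit. Sketch of the invocation; the meaning of -t is an assumption, it reads like a microsecond budget matching the "Time used" units:

    # Controller bring-up timing, as invoked by nvme.sh@99 above.
    # -t 1000000: time budget in microseconds (assumption).
    sudo /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000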
00:12:26.669 ************************************ 00:12:26.669 END TEST nvme_startup 00:12:26.669 ************************************ 00:12:26.669 00:12:26.669 real 0m0.217s 00:12:26.669 user 0m0.072s 00:12:26.669 sys 0m0.103s 00:12:26.669 06:38:39 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:26.670 06:38:39 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:12:26.670 06:38:39 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:12:26.670 06:38:39 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:26.670 06:38:39 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:26.670 06:38:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:26.670 ************************************ 00:12:26.670 START TEST nvme_multi_secondary 00:12:26.670 ************************************ 00:12:26.670 06:38:39 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:12:26.670 06:38:39 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=64046 00:12:26.670 06:38:39 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:12:26.670 06:38:39 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=64047 00:12:26.670 06:38:39 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:12:26.670 06:38:39 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:12:29.959 Initializing NVMe Controllers 00:12:29.959 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:29.959 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:29.959 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:29.959 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:29.959 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:12:29.959 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:12:29.959 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:12:29.959 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:12:29.959 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:12:29.959 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:12:29.959 Initialization complete. Launching workers. 
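nvme_multi_secondary runs three spdk_nvme_perf instances concurrently against the same four controllers, separated only by core mask and run time. All three pass the same -i 0, the shared-memory group id that lets the later (secondary) processes attach to controllers the first (primary) process already owns. A sketch of the orchestration just traced at nvme.sh@51-@55:

    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    # Long-lived 5-second read run pinned to core 0 (mask 0x1).
    sudo "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!
    # Shorter 3-second runs on cores 1 and 2 (masks 0x2, 0x4).
    sudo "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &
    sudo "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 & pid1=$!
    wait "$pid0" && wait "$pid1"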
00:12:29.959 ======================================================== 00:12:29.959 Latency(us) 00:12:29.959 Device Information : IOPS MiB/s Average min max 00:12:29.959 PCIE (0000:00:10.0) NSID 1 from core 2: 1747.35 6.83 9154.33 1663.05 30134.50 00:12:29.959 PCIE (0000:00:11.0) NSID 1 from core 2: 1747.35 6.83 9157.53 1934.69 38542.54 00:12:29.959 PCIE (0000:00:13.0) NSID 1 from core 2: 1747.35 6.83 9159.14 1853.12 34771.88 00:12:29.959 PCIE (0000:00:12.0) NSID 1 from core 2: 1747.35 6.83 9159.43 1874.40 28104.20 00:12:29.959 PCIE (0000:00:12.0) NSID 2 from core 2: 1747.35 6.83 9159.92 1940.62 30056.26 00:12:29.959 PCIE (0000:00:12.0) NSID 3 from core 2: 1747.35 6.83 9159.12 1931.51 29165.33 00:12:29.959 ======================================================== 00:12:29.959 Total : 10484.11 40.95 9158.25 1663.05 38542.54 00:12:29.959 00:12:29.959 06:38:42 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 64046 00:12:29.959 Initializing NVMe Controllers 00:12:29.959 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:29.959 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:29.959 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:29.959 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:29.959 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:12:29.959 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:12:29.959 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:12:29.959 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:12:29.959 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:12:29.959 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:12:29.959 Initialization complete. Launching workers. 00:12:29.959 ======================================================== 00:12:29.959 Latency(us) 00:12:29.959 Device Information : IOPS MiB/s Average min max 00:12:29.959 PCIE (0000:00:10.0) NSID 1 from core 1: 4141.32 16.18 3861.63 1710.62 13411.90 00:12:29.959 PCIE (0000:00:11.0) NSID 1 from core 1: 4141.32 16.18 3862.96 1658.09 12715.46 00:12:29.959 PCIE (0000:00:13.0) NSID 1 from core 1: 4141.32 16.18 3863.11 1521.37 12532.37 00:12:29.959 PCIE (0000:00:12.0) NSID 1 from core 1: 4141.32 16.18 3863.32 1669.36 12898.42 00:12:29.959 PCIE (0000:00:12.0) NSID 2 from core 1: 4141.32 16.18 3863.45 1564.97 13215.69 00:12:29.959 PCIE (0000:00:12.0) NSID 3 from core 1: 4141.32 16.18 3863.96 1551.50 13258.44 00:12:29.959 ======================================================== 00:12:29.959 Total : 24847.91 97.06 3863.07 1521.37 13411.90 00:12:29.959 00:12:31.857 Initializing NVMe Controllers 00:12:31.857 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:31.857 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:31.857 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:31.857 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:31.857 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:12:31.857 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:12:31.857 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:12:31.857 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:12:31.857 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:12:31.857 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:12:31.857 Initialization complete. Launching workers. 
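A quick consistency check on the two latency tables above: with a fixed queue depth, IOPS and average latency are tied together by Little's law (in-flight I/Os = IOPS x average latency). Taking the 0000:00:10.0 row from each table:

    core 2 run: 1747.35 IO/s x 9154.33 us = 1747.35 x 0.00915433 s ~ 16 in flight
    core 1 run: 4141.32 IO/s x 3861.63 us = 4141.32 x 0.00386163 s ~ 16 in flight

Both recover the -q 16 queue depth passed to spdk_nvme_perf, which suggests the much higher core 2 latencies reflect the overlapping runs sharing the drives rather than a measurement artifact.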
00:12:31.857 ======================================================== 00:12:31.857 Latency(us) 00:12:31.857 Device Information : IOPS MiB/s Average min max 00:12:31.857 PCIE (0000:00:10.0) NSID 1 from core 0: 8396.49 32.80 1904.19 706.94 14690.84 00:12:31.857 PCIE (0000:00:11.0) NSID 1 from core 0: 8396.49 32.80 1905.12 725.40 12853.96 00:12:31.857 PCIE (0000:00:13.0) NSID 1 from core 0: 8396.49 32.80 1905.08 674.06 12884.24 00:12:31.857 PCIE (0000:00:12.0) NSID 1 from core 0: 8396.49 32.80 1905.05 630.74 11309.51 00:12:31.857 PCIE (0000:00:12.0) NSID 2 from core 0: 8396.49 32.80 1905.02 592.83 13560.46 00:12:31.857 PCIE (0000:00:12.0) NSID 3 from core 0: 8396.49 32.80 1904.99 585.58 13895.12 00:12:31.857 ======================================================== 00:12:31.857 Total : 50378.95 196.79 1904.91 585.58 14690.84 00:12:31.857 00:12:31.857 06:38:44 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 64047 00:12:31.857 06:38:44 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=64116 00:12:31.857 06:38:44 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:12:31.857 06:38:44 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=64117 00:12:31.857 06:38:44 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:12:31.857 06:38:44 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:12:35.136 Initializing NVMe Controllers 00:12:35.136 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:35.136 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:35.137 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:35.137 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:35.137 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:12:35.137 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:12:35.137 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:12:35.137 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:12:35.137 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:12:35.137 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:12:35.137 Initialization complete. Launching workers. 
00:12:35.137 ======================================================== 00:12:35.137 Latency(us) 00:12:35.137 Device Information : IOPS MiB/s Average min max 00:12:35.137 PCIE (0000:00:10.0) NSID 1 from core 1: 7739.41 30.23 2065.93 729.04 7619.34 00:12:35.137 PCIE (0000:00:11.0) NSID 1 from core 1: 7739.41 30.23 2067.09 761.38 8113.82 00:12:35.137 PCIE (0000:00:13.0) NSID 1 from core 1: 7739.41 30.23 2067.16 752.05 8434.27 00:12:35.137 PCIE (0000:00:12.0) NSID 1 from core 1: 7739.41 30.23 2067.20 749.93 8268.69 00:12:35.137 PCIE (0000:00:12.0) NSID 2 from core 1: 7739.41 30.23 2067.18 754.85 7260.85 00:12:35.137 PCIE (0000:00:12.0) NSID 3 from core 1: 7739.41 30.23 2067.33 740.08 7361.48 00:12:35.137 ======================================================== 00:12:35.137 Total : 46436.45 181.39 2066.98 729.04 8434.27 00:12:35.137 00:12:35.394 Initializing NVMe Controllers 00:12:35.394 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:35.395 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:35.395 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:35.395 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:35.395 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:12:35.395 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:12:35.395 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:12:35.395 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:12:35.395 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:12:35.395 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:12:35.395 Initialization complete. Launching workers. 00:12:35.395 ======================================================== 00:12:35.395 Latency(us) 00:12:35.395 Device Information : IOPS MiB/s Average min max 00:12:35.395 PCIE (0000:00:10.0) NSID 1 from core 0: 7831.28 30.59 2041.68 736.88 6074.04 00:12:35.395 PCIE (0000:00:11.0) NSID 1 from core 0: 7831.28 30.59 2042.67 750.51 5907.23 00:12:35.395 PCIE (0000:00:13.0) NSID 1 from core 0: 7831.28 30.59 2042.62 746.84 5965.04 00:12:35.395 PCIE (0000:00:12.0) NSID 1 from core 0: 7831.28 30.59 2042.59 749.51 5535.60 00:12:35.395 PCIE (0000:00:12.0) NSID 2 from core 0: 7831.28 30.59 2042.61 752.04 5395.93 00:12:35.395 PCIE (0000:00:12.0) NSID 3 from core 0: 7831.28 30.59 2042.57 743.99 5764.96 00:12:35.395 ======================================================== 00:12:35.395 Total : 46987.69 183.55 2042.46 736.88 6074.04 00:12:35.395 00:12:37.292 Initializing NVMe Controllers 00:12:37.292 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:37.292 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:37.292 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:37.292 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:37.292 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:12:37.292 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:12:37.292 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:12:37.292 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:12:37.292 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:12:37.292 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:12:37.292 Initialization complete. Launching workers. 
00:12:37.292 ======================================================== 00:12:37.292 Latency(us) 00:12:37.292 Device Information : IOPS MiB/s Average min max 00:12:37.292 PCIE (0000:00:10.0) NSID 1 from core 2: 4572.34 17.86 3495.24 723.48 15147.29 00:12:37.292 PCIE (0000:00:11.0) NSID 1 from core 2: 4572.34 17.86 3495.54 727.35 14729.61 00:12:37.292 PCIE (0000:00:13.0) NSID 1 from core 2: 4572.34 17.86 3495.48 770.77 14181.19 00:12:37.292 PCIE (0000:00:12.0) NSID 1 from core 2: 4572.34 17.86 3495.60 722.70 16551.98 00:12:37.292 PCIE (0000:00:12.0) NSID 2 from core 2: 4572.34 17.86 3495.37 670.34 14284.29 00:12:37.292 PCIE (0000:00:12.0) NSID 3 from core 2: 4572.34 17.86 3495.68 627.52 14945.33 00:12:37.292 ======================================================== 00:12:37.292 Total : 27434.05 107.16 3495.48 627.52 16551.98 00:12:37.292 00:12:37.292 ************************************ 00:12:37.292 END TEST nvme_multi_secondary 00:12:37.292 ************************************ 00:12:37.292 06:38:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 64116 00:12:37.292 06:38:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 64117 00:12:37.292 00:12:37.292 real 0m10.724s 00:12:37.292 user 0m18.382s 00:12:37.292 sys 0m0.659s 00:12:37.292 06:38:49 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:37.292 06:38:49 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:12:37.292 06:38:49 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:12:37.292 06:38:49 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:12:37.292 06:38:49 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/63079 ]] 00:12:37.292 06:38:49 nvme -- common/autotest_common.sh@1094 -- # kill 63079 00:12:37.292 06:38:49 nvme -- common/autotest_common.sh@1095 -- # wait 63079 00:12:37.292 [2024-12-06 06:38:49.979812] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63989) is not found. Dropping the request. 00:12:37.292 [2024-12-06 06:38:49.979907] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63989) is not found. Dropping the request. 00:12:37.292 [2024-12-06 06:38:49.979945] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63989) is not found. Dropping the request. 00:12:37.292 [2024-12-06 06:38:49.979968] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63989) is not found. Dropping the request. 00:12:37.292 [2024-12-06 06:38:49.981742] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63989) is not found. Dropping the request. 00:12:37.292 [2024-12-06 06:38:49.981778] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63989) is not found. Dropping the request. 00:12:37.292 [2024-12-06 06:38:49.981789] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63989) is not found. Dropping the request. 00:12:37.292 [2024-12-06 06:38:49.981800] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63989) is not found. Dropping the request. 00:12:37.292 [2024-12-06 06:38:49.983363] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63989) is not found. Dropping the request. 
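The *ERROR* lines starting here and continuing below are expected teardown noise rather than failures: kill_stub signals the long-lived stub process (pid 63079 in this run), and admin requests still registered to an already-exited test process (pid 63989 here) are dropped by nvme_pcie instead of being completed. The guard traced above at autotest_common.sh@1093-1095, in condensed form:

    # kill_stub-style teardown: only signal the stub if it is still alive.
    stub_pid=63079   # pid taken from this particular run
    if [[ -e "/proc/$stub_pid" ]]; then
        kill "$stub_pid"
        wait "$stub_pid" 2>/dev/null || true
    fi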
00:12:37.292 [2024-12-06 06:38:49.983398] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63989) is not found. Dropping the request. 00:12:37.292 [2024-12-06 06:38:49.983409] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63989) is not found. Dropping the request. 00:12:37.292 [2024-12-06 06:38:49.983421] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63989) is not found. Dropping the request. 00:12:37.292 [2024-12-06 06:38:49.984992] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63989) is not found. Dropping the request. 00:12:37.292 [2024-12-06 06:38:49.985031] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63989) is not found. Dropping the request. 00:12:37.292 [2024-12-06 06:38:49.985042] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63989) is not found. Dropping the request. 00:12:37.292 [2024-12-06 06:38:49.985054] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63989) is not found. Dropping the request. 00:12:37.550 06:38:50 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:12:37.550 06:38:50 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:12:37.550 06:38:50 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:12:37.550 06:38:50 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:37.550 06:38:50 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:37.550 06:38:50 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:37.550 ************************************ 00:12:37.550 START TEST bdev_nvme_reset_stuck_adm_cmd 00:12:37.550 ************************************ 00:12:37.550 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:12:37.550 * Looking for test storage... 
00:12:37.550 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:37.550 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:37.550 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version 00:12:37.550 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:37.550 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:37.550 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:37.550 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:37.550 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:37.550 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:12:37.550 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:12:37.550 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:12:37.550 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:12:37.550 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:12:37.550 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:12:37.551 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:12:37.551 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:37.551 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:12:37.551 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:12:37.551 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:37.551 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:37.551 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:12:37.551 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:12:37.551 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:37.551 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:12:37.551 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:12:37.551 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:12:37.551 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:12:37.551 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:37.551 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:12:37.551 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:12:37.551 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:37.551 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:37.551 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:12:37.551 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:37.551 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:37.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.551 --rc genhtml_branch_coverage=1 00:12:37.551 --rc genhtml_function_coverage=1 00:12:37.551 --rc genhtml_legend=1 00:12:37.551 --rc geninfo_all_blocks=1 00:12:37.551 --rc geninfo_unexecuted_blocks=1 00:12:37.551 00:12:37.551 ' 00:12:37.551 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:37.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.551 --rc genhtml_branch_coverage=1 00:12:37.551 --rc genhtml_function_coverage=1 00:12:37.551 --rc genhtml_legend=1 00:12:37.551 --rc geninfo_all_blocks=1 00:12:37.551 --rc geninfo_unexecuted_blocks=1 00:12:37.551 00:12:37.551 ' 00:12:37.551 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:37.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.551 --rc genhtml_branch_coverage=1 00:12:37.551 --rc genhtml_function_coverage=1 00:12:37.551 --rc genhtml_legend=1 00:12:37.551 --rc geninfo_all_blocks=1 00:12:37.551 --rc geninfo_unexecuted_blocks=1 00:12:37.551 00:12:37.551 ' 00:12:37.551 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:37.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.551 --rc genhtml_branch_coverage=1 00:12:37.551 --rc genhtml_function_coverage=1 00:12:37.551 --rc genhtml_legend=1 00:12:37.551 --rc geninfo_all_blocks=1 00:12:37.551 --rc geninfo_unexecuted_blocks=1 00:12:37.551 00:12:37.551 ' 00:12:37.551 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:12:37.551 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:12:37.551 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:12:37.551 
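The parameters being set here frame the whole test: an admin-command error injection will hold one command for err_injection_timeout (15,000,000 us = 15 s) while the test only waits test_timeout (5 s), so the stuck command has to be cleared by a controller reset rather than by normal completion. The injection is installed over RPC, as the trace further below shows; a condensed sketch, assuming a target already listening on /var/tmp/spdk.sock:

    SPDK=/home/vagrant/spdk_repo/spdk
    # Attach the PCIe controller under test as bdev controller "nvme0".
    "$SPDK/scripts/rpc.py" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    # Hold the next admin opcode 10 (GET FEATURES) for 15 s and fail it with
    # status type 0 / status code 1, without ever submitting it to the device.
    "$SPDK/scripts/rpc.py" bdev_nvme_add_error_injection -n nvme0 \
        --cmd-type admin --opc 10 --timeout-in-us 15000000 \
        --err-count 1 --sct 0 --sc 1 --do_not_submit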
06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:12:37.551 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:12:37.551 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:12:37.551 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:12:37.551 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:12:37.551 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:12:37.551 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:12:37.551 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:12:37.551 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:12:37.551 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:37.551 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:37.551 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:12:37.809 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:12:37.809 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:37.809 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:12:37.809 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:12:37.809 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:12:37.809 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64278 00:12:37.809 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:37.809 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:12:37.809 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64278 00:12:37.809 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 64278 ']' 00:12:37.809 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.809 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:37.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.809 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:37.809 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:37.809 06:38:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:37.809 [2024-12-06 06:38:50.409218] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:12:37.809 [2024-12-06 06:38:50.409341] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64278 ] 00:12:38.066 [2024-12-06 06:38:50.579404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:38.066 [2024-12-06 06:38:50.683483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.066 [2024-12-06 06:38:50.683662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:38.066 [2024-12-06 06:38:50.684152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:38.066 [2024-12-06 06:38:50.684277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.632 06:38:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:38.632 06:38:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:12:38.632 06:38:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:12:38.632 06:38:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.632 06:38:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:38.632 nvme0n1 00:12:38.632 06:38:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.632 06:38:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:12:38.632 06:38:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_lWenc.txt 00:12:38.632 06:38:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:12:38.632 06:38:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.632 06:38:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:38.632 true 00:12:38.632 06:38:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.632 06:38:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:12:38.632 06:38:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733467131 00:12:38.632 06:38:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64301 00:12:38.632 06:38:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:38.632 06:38:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:12:38.632 06:38:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:12:41.192 06:38:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:12:41.192 06:38:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.192 06:38:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:41.192 [2024-12-06 06:38:53.377676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:12:41.192 [2024-12-06 06:38:53.378280] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:12:41.192 [2024-12-06 06:38:53.378329] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:41.192 [2024-12-06 06:38:53.378345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.192 [2024-12-06 06:38:53.380032] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:12:41.192 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64301 00:12:41.192 06:38:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.192 06:38:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64301 00:12:41.192 06:38:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64301 00:12:41.192 06:38:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:12:41.192 06:38:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:12:41.192 06:38:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:12:41.192 06:38:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.192 06:38:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:41.192 06:38:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.192 06:38:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:12:41.192 06:38:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_lWenc.txt 00:12:41.192 06:38:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:12:41.192 06:38:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:12:41.192 06:38:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:12:41.192 06:38:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:12:41.192 06:38:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:12:41.192 06:38:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:12:41.192 06:38:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:12:41.192 06:38:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:12:41.192 06:38:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:12:41.192 06:38:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:12:41.192 06:38:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:12:41.192 06:38:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:12:41.192 06:38:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:12:41.192 06:38:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:12:41.192 06:38:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:12:41.192 06:38:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:12:41.192 06:38:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:12:41.192 06:38:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:12:41.192 06:38:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:12:41.193 06:38:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_lWenc.txt 00:12:41.193 06:38:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64278 00:12:41.193 06:38:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 64278 ']' 00:12:41.193 06:38:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 64278 00:12:41.193 06:38:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:12:41.193 06:38:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:41.193 06:38:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64278 00:12:41.193 killing process with pid 64278 00:12:41.193 06:38:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:41.193 06:38:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:41.193 06:38:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64278' 00:12:41.193 06:38:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 64278 00:12:41.193 06:38:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 64278 00:12:42.581 06:38:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:12:42.581 06:38:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:12:42.581 00:12:42.581 real 0m4.894s 00:12:42.581 user 0m17.382s 00:12:42.581 sys 0m0.492s 00:12:42.581 06:38:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:12:42.581 06:38:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:42.581 ************************************ 00:12:42.581 END TEST bdev_nvme_reset_stuck_adm_cmd 00:12:42.581 ************************************ 00:12:42.581 06:38:55 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:12:42.581 06:38:55 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:12:42.581 06:38:55 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:42.581 06:38:55 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:42.581 06:38:55 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:42.581 ************************************ 00:12:42.581 START TEST nvme_fio 00:12:42.581 ************************************ 00:12:42.581 06:38:55 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:12:42.581 06:38:55 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:12:42.581 06:38:55 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:12:42.581 06:38:55 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:12:42.581 06:38:55 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:12:42.581 06:38:55 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:12:42.581 06:38:55 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:42.581 06:38:55 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:42.581 06:38:55 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:12:42.581 06:38:55 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:12:42.581 06:38:55 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:42.581 06:38:55 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:12:42.581 06:38:55 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:12:42.581 06:38:55 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:42.581 06:38:55 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:12:42.581 06:38:55 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:42.839 06:38:55 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:12:42.839 06:38:55 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:42.839 06:38:55 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:12:42.839 06:38:55 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:12:42.839 06:38:55 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:12:42.839 06:38:55 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:42.839 06:38:55 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:42.839 06:38:55 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:42.839 06:38:55 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:42.839 06:38:55 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:12:42.839 06:38:55 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:42.839 06:38:55 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:42.839 06:38:55 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:12:42.839 06:38:55 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:42.839 06:38:55 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:43.096 06:38:55 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:43.096 06:38:55 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:43.096 06:38:55 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:12:43.096 06:38:55 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:43.096 06:38:55 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:12:43.096 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:43.096 fio-3.35 00:12:43.096 Starting 1 thread 00:12:53.089 00:12:53.089 test: (groupid=0, jobs=1): err= 0: pid=64441: Fri Dec 6 06:39:04 2024 00:12:53.089 read: IOPS=23.9k, BW=93.3MiB/s (97.8MB/s)(187MiB/2001msec) 00:12:53.089 slat (nsec): min=4199, max=46867, avg=4994.09, stdev=2059.92 00:12:53.089 clat (usec): min=226, max=9054, avg=2675.80, stdev=788.47 00:12:53.089 lat (usec): min=230, max=9099, avg=2680.80, stdev=789.77 00:12:53.089 clat percentiles (usec): 00:12:53.089 | 1.00th=[ 1745], 5.00th=[ 2147], 10.00th=[ 2311], 20.00th=[ 2376], 00:12:53.089 | 30.00th=[ 2409], 40.00th=[ 2442], 50.00th=[ 2474], 60.00th=[ 2507], 00:12:53.089 | 70.00th=[ 2540], 80.00th=[ 2606], 90.00th=[ 3228], 95.00th=[ 4621], 00:12:53.089 | 99.00th=[ 6128], 99.50th=[ 6652], 99.90th=[ 7439], 99.95th=[ 7504], 00:12:53.089 | 99.99th=[ 8979] 00:12:53.089 bw ( KiB/s): min=89432, max=99600, per=99.34%, avg=94877.33, stdev=5122.38, samples=3 00:12:53.089 iops : min=22358, max=24900, avg=23719.33, stdev=1280.59, samples=3 00:12:53.089 write: IOPS=23.7k, BW=92.7MiB/s (97.2MB/s)(185MiB/2001msec); 0 zone resets 00:12:53.089 slat (nsec): min=4260, max=72021, avg=5284.45, stdev=2137.95 00:12:53.089 clat (usec): min=243, max=8997, avg=2681.84, stdev=798.63 00:12:53.089 lat (usec): min=248, max=9009, avg=2687.12, stdev=799.91 00:12:53.089 clat percentiles (usec): 00:12:53.089 | 1.00th=[ 1745], 5.00th=[ 2147], 10.00th=[ 2311], 20.00th=[ 2376], 00:12:53.089 | 30.00th=[ 2409], 40.00th=[ 2442], 50.00th=[ 2474], 60.00th=[ 2507], 00:12:53.089 | 70.00th=[ 2540], 80.00th=[ 2606], 90.00th=[ 3228], 95.00th=[ 4686], 00:12:53.089 | 99.00th=[ 6259], 99.50th=[ 6718], 99.90th=[ 7439], 99.95th=[ 7570], 00:12:53.089 | 99.99th=[ 8848] 00:12:53.089 bw ( KiB/s): min=88280, max=100920, per=99.97%, avg=94888.00, stdev=6339.66, samples=3 00:12:53.089 iops : min=22070, max=25230, avg=23722.00, stdev=1584.91, samples=3 00:12:53.089 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.03% 00:12:53.089 lat (msec) : 2=2.83%, 4=90.29%, 10=6.82% 00:12:53.089 cpu : usr=99.30%, sys=0.00%, ctx=3, majf=0, 
minf=609 00:12:53.089 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:53.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:53.089 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:53.089 issued rwts: total=47778,47482,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:53.089 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:53.089 00:12:53.089 Run status group 0 (all jobs): 00:12:53.089 READ: bw=93.3MiB/s (97.8MB/s), 93.3MiB/s-93.3MiB/s (97.8MB/s-97.8MB/s), io=187MiB (196MB), run=2001-2001msec 00:12:53.089 WRITE: bw=92.7MiB/s (97.2MB/s), 92.7MiB/s-92.7MiB/s (97.2MB/s-97.2MB/s), io=185MiB (194MB), run=2001-2001msec 00:12:53.089 ----------------------------------------------------- 00:12:53.089 Suppressions used: 00:12:53.089 count bytes template 00:12:53.089 1 32 /usr/src/fio/parse.c 00:12:53.089 1 8 libtcmalloc_minimal.so 00:12:53.089 ----------------------------------------------------- 00:12:53.089 00:12:53.089 06:39:04 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:53.089 06:39:04 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:53.089 06:39:04 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:53.089 06:39:04 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:12:53.089 06:39:05 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:12:53.089 06:39:05 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:53.089 06:39:05 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:12:53.089 06:39:05 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:12:53.089 06:39:05 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:12:53.089 06:39:05 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:53.089 06:39:05 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:53.089 06:39:05 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:53.089 06:39:05 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:53.089 06:39:05 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:12:53.089 06:39:05 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:53.089 06:39:05 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:53.089 06:39:05 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:53.089 06:39:05 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:12:53.089 06:39:05 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:53.089 06:39:05 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:53.089 06:39:05 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:53.089 06:39:05 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:12:53.089 06:39:05 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:53.089 06:39:05 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:12:53.089 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:53.089 fio-3.35 00:12:53.089 Starting 1 thread 00:13:03.058 00:13:03.058 test: (groupid=0, jobs=1): err= 0: pid=64500: Fri Dec 6 06:39:13 2024 00:13:03.058 read: IOPS=21.4k, BW=83.5MiB/s (87.6MB/s)(167MiB/2001msec) 00:13:03.058 slat (nsec): min=4194, max=76523, avg=5176.44, stdev=2183.12 00:13:03.058 clat (usec): min=201, max=9410, avg=2988.41, stdev=1062.76 00:13:03.058 lat (usec): min=206, max=9428, avg=2993.59, stdev=1063.86 00:13:03.058 clat percentiles (usec): 00:13:03.058 | 1.00th=[ 2073], 5.00th=[ 2343], 10.00th=[ 2376], 20.00th=[ 2409], 00:13:03.058 | 30.00th=[ 2442], 40.00th=[ 2474], 50.00th=[ 2507], 60.00th=[ 2573], 00:13:03.058 | 70.00th=[ 2769], 80.00th=[ 3458], 90.00th=[ 4293], 95.00th=[ 5604], 00:13:03.058 | 99.00th=[ 6980], 99.50th=[ 7439], 99.90th=[ 8356], 99.95th=[ 8586], 00:13:03.058 | 99.99th=[ 8717] 00:13:03.058 bw ( KiB/s): min=66200, max=95536, per=94.26%, avg=80640.00, stdev=14673.32, samples=3 00:13:03.058 iops : min=16550, max=23884, avg=20160.67, stdev=3668.30, samples=3 00:13:03.058 write: IOPS=21.2k, BW=82.9MiB/s (86.9MB/s)(166MiB/2001msec); 0 zone resets 00:13:03.058 slat (nsec): min=4297, max=75131, avg=5484.19, stdev=2297.09 00:13:03.058 clat (usec): min=234, max=9311, avg=2994.77, stdev=1064.82 00:13:03.058 lat (usec): min=239, max=9329, avg=3000.26, stdev=1065.95 00:13:03.058 clat percentiles (usec): 00:13:03.058 | 1.00th=[ 2073], 5.00th=[ 2343], 10.00th=[ 2376], 20.00th=[ 2409], 00:13:03.058 | 30.00th=[ 2442], 40.00th=[ 2474], 50.00th=[ 2507], 60.00th=[ 2573], 00:13:03.058 | 70.00th=[ 2802], 80.00th=[ 3490], 90.00th=[ 4293], 95.00th=[ 5669], 00:13:03.058 | 99.00th=[ 7046], 99.50th=[ 7504], 99.90th=[ 8291], 99.95th=[ 8586], 00:13:03.058 | 99.99th=[ 9241] 00:13:03.058 bw ( KiB/s): min=66496, max=95592, per=95.07%, avg=80720.00, stdev=14558.82, samples=3 00:13:03.058 iops : min=16624, max=23898, avg=20180.00, stdev=3639.70, samples=3 00:13:03.058 lat (usec) : 250=0.01%, 500=0.01%, 750=0.03%, 1000=0.01% 00:13:03.058 lat (msec) : 2=0.67%, 4=86.09%, 10=13.19% 00:13:03.058 cpu : usr=99.20%, sys=0.05%, ctx=3, majf=0, minf=608 00:13:03.058 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:03.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:03.058 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:03.058 issued rwts: total=42796,42475,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:03.058 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:03.058 00:13:03.058 Run status group 0 (all jobs): 00:13:03.058 READ: bw=83.5MiB/s (87.6MB/s), 83.5MiB/s-83.5MiB/s (87.6MB/s-87.6MB/s), io=167MiB (175MB), run=2001-2001msec 00:13:03.058 WRITE: bw=82.9MiB/s (86.9MB/s), 82.9MiB/s-82.9MiB/s (86.9MB/s-86.9MB/s), io=166MiB (174MB), run=2001-2001msec 00:13:03.058 ----------------------------------------------------- 00:13:03.058 Suppressions used: 00:13:03.058 count bytes template 00:13:03.058 1 32 /usr/src/fio/parse.c 00:13:03.058 1 8 libtcmalloc_minimal.so 00:13:03.058 ----------------------------------------------------- 00:13:03.058 
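Both fio runs above go through the same preload dance: ldd inspects the SPDK fio plugin, awk grabs the path of any ASAN runtime it links, and that library is placed ahead of the plugin in LD_PRELOAD so fio can dlopen the engine under the sanitizer. A hedged sketch of that logic using the paths from this log; run_fio_with_plugin is an illustrative wrapper, not the real fio_plugin helper:

    # If the plugin links libasan, the runtime must be preloaded first,
    # otherwise fio cannot dlopen the SPDK ioengine.
    run_fio_with_plugin() {
        local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
        local asan_lib
        asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
        LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" /usr/src/fio/fio "$@"
    }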
00:13:03.058 06:39:14 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:03.058 06:39:14 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:03.058 06:39:14 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:03.058 06:39:14 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:13:03.058 06:39:14 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:13:03.058 06:39:14 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:03.058 06:39:14 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:13:03.058 06:39:14 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:13:03.058 06:39:14 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:13:03.058 06:39:14 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:03.058 06:39:14 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:03.058 06:39:14 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:03.058 06:39:14 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:03.058 06:39:14 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:13:03.058 06:39:14 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:03.058 06:39:14 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:03.058 06:39:14 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:13:03.058 06:39:14 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:03.058 06:39:14 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:03.058 06:39:14 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:03.058 06:39:14 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:03.058 06:39:14 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:13:03.058 06:39:14 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:03.058 06:39:14 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:13:03.058 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:03.058 fio-3.35 00:13:03.058 Starting 1 thread 00:13:13.025 00:13:13.025 test: (groupid=0, jobs=1): err= 0: pid=64567: Fri Dec 6 06:39:25 2024 00:13:13.025 read: IOPS=23.5k, BW=91.8MiB/s (96.3MB/s)(184MiB/2001msec) 00:13:13.025 slat (nsec): min=3339, max=64648, avg=4964.09, stdev=2113.88 00:13:13.025 clat (usec): min=476, max=11079, avg=2720.53, stdev=781.51 00:13:13.025 lat (usec): min=485, max=11127, avg=2725.49, stdev=782.77 00:13:13.025 clat percentiles (usec): 00:13:13.026 | 1.00th=[ 1958], 5.00th=[ 2180], 10.00th=[ 2278], 20.00th=[ 2376], 00:13:13.026 | 
30.00th=[ 2442], 40.00th=[ 2442], 50.00th=[ 2474], 60.00th=[ 2507], 00:13:13.026 | 70.00th=[ 2573], 80.00th=[ 2737], 90.00th=[ 3458], 95.00th=[ 4555], 00:13:13.026 | 99.00th=[ 5997], 99.50th=[ 6652], 99.90th=[ 7832], 99.95th=[ 8225], 00:13:13.026 | 99.99th=[10814] 00:13:13.026 bw ( KiB/s): min=91928, max=96256, per=99.82%, avg=93882.67, stdev=2194.16, samples=3 00:13:13.026 iops : min=22982, max=24064, avg=23470.67, stdev=548.54, samples=3 00:13:13.026 write: IOPS=23.3k, BW=91.2MiB/s (95.6MB/s)(183MiB/2001msec); 0 zone resets 00:13:13.026 slat (nsec): min=3444, max=87531, avg=5255.71, stdev=2173.33 00:13:13.026 clat (usec): min=438, max=10899, avg=2721.71, stdev=784.06 00:13:13.026 lat (usec): min=447, max=10914, avg=2726.96, stdev=785.31 00:13:13.026 clat percentiles (usec): 00:13:13.026 | 1.00th=[ 1958], 5.00th=[ 2180], 10.00th=[ 2278], 20.00th=[ 2376], 00:13:13.026 | 30.00th=[ 2442], 40.00th=[ 2474], 50.00th=[ 2474], 60.00th=[ 2507], 00:13:13.026 | 70.00th=[ 2573], 80.00th=[ 2737], 90.00th=[ 3425], 95.00th=[ 4555], 00:13:13.026 | 99.00th=[ 6128], 99.50th=[ 6718], 99.90th=[ 7898], 99.95th=[ 8291], 00:13:13.026 | 99.99th=[10421] 00:13:13.026 bw ( KiB/s): min=91744, max=95192, per=100.00%, avg=93970.67, stdev=1931.37, samples=3 00:13:13.026 iops : min=22936, max=23798, avg=23492.67, stdev=482.84, samples=3 00:13:13.026 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:13:13.026 lat (msec) : 2=1.12%, 4=91.59%, 10=7.25%, 20=0.02% 00:13:13.026 cpu : usr=99.15%, sys=0.15%, ctx=7, majf=0, minf=608 00:13:13.026 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:13.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:13.026 issued rwts: total=47047,46725,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:13.026 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:13.026 00:13:13.026 Run status group 0 (all jobs): 00:13:13.026 READ: bw=91.8MiB/s (96.3MB/s), 91.8MiB/s-91.8MiB/s (96.3MB/s-96.3MB/s), io=184MiB (193MB), run=2001-2001msec 00:13:13.026 WRITE: bw=91.2MiB/s (95.6MB/s), 91.2MiB/s-91.2MiB/s (95.6MB/s-95.6MB/s), io=183MiB (191MB), run=2001-2001msec 00:13:13.026 ----------------------------------------------------- 00:13:13.026 Suppressions used: 00:13:13.026 count bytes template 00:13:13.026 1 32 /usr/src/fio/parse.c 00:13:13.026 1 8 libtcmalloc_minimal.so 00:13:13.026 ----------------------------------------------------- 00:13:13.026 00:13:13.026 06:39:25 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:13.026 06:39:25 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:13.026 06:39:25 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:13:13.026 06:39:25 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:13.283 06:39:25 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:13:13.283 06:39:25 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:13.540 06:39:26 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:13:13.540 06:39:26 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:13:13.540 06:39:26 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:13:13.540 06:39:26 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:13.540 06:39:26 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:13.540 06:39:26 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:13.540 06:39:26 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:13.540 06:39:26 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:13:13.540 06:39:26 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:13.540 06:39:26 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:13.540 06:39:26 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:13.540 06:39:26 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:13.540 06:39:26 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:13:13.540 06:39:26 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:13.540 06:39:26 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:13.540 06:39:26 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:13:13.540 06:39:26 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:13.540 06:39:26 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:13:13.798 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:13.798 fio-3.35 00:13:13.798 Starting 1 thread 00:13:28.670 00:13:28.670 test: (groupid=0, jobs=1): err= 0: pid=64622: Fri Dec 6 06:39:40 2024 00:13:28.670 read: IOPS=24.7k, BW=96.3MiB/s (101MB/s)(193MiB/2001msec) 00:13:28.670 slat (nsec): min=3364, max=65895, avg=4837.55, stdev=1982.84 00:13:28.670 clat (usec): min=280, max=13871, avg=2589.29, stdev=667.53 00:13:28.670 lat (usec): min=285, max=13932, avg=2594.12, stdev=668.73 00:13:28.670 clat percentiles (usec): 00:13:28.670 | 1.00th=[ 1713], 5.00th=[ 2212], 10.00th=[ 2343], 20.00th=[ 2376], 00:13:28.670 | 30.00th=[ 2409], 40.00th=[ 2442], 50.00th=[ 2442], 60.00th=[ 2474], 00:13:28.670 | 70.00th=[ 2507], 80.00th=[ 2573], 90.00th=[ 2802], 95.00th=[ 3392], 00:13:28.670 | 99.00th=[ 5932], 99.50th=[ 6390], 99.90th=[ 7570], 99.95th=[ 9634], 00:13:28.670 | 99.99th=[13566] 00:13:28.670 bw ( KiB/s): min=95032, max=104440, per=100.00%, avg=100186.67, stdev=4768.32, samples=3 00:13:28.670 iops : min=23758, max=26110, avg=25046.67, stdev=1192.08, samples=3 00:13:28.670 write: IOPS=24.5k, BW=95.7MiB/s (100MB/s)(192MiB/2001msec); 0 zone resets 00:13:28.670 slat (usec): min=3, max=174, avg= 5.16, stdev= 2.15 00:13:28.670 clat (usec): min=272, max=13658, avg=2595.34, stdev=680.03 00:13:28.670 lat (usec): min=276, max=13683, avg=2600.50, stdev=681.26 00:13:28.670 clat percentiles (usec): 00:13:28.670 | 1.00th=[ 1745], 5.00th=[ 2245], 10.00th=[ 2343], 20.00th=[ 2376], 00:13:28.670 | 30.00th=[ 2409], 40.00th=[ 2442], 50.00th=[ 2442], 60.00th=[ 2474], 00:13:28.670 | 70.00th=[ 2507], 80.00th=[ 2573], 90.00th=[ 2835], 95.00th=[ 3425], 00:13:28.670 | 
99.00th=[ 5932], 99.50th=[ 6390], 99.90th=[ 7635], 99.95th=[10159], 00:13:28.670 | 99.99th=[13173] 00:13:28.670 bw ( KiB/s): min=95000, max=104288, per=100.00%, avg=100221.33, stdev=4750.44, samples=3 00:13:28.670 iops : min=23750, max=26072, avg=25055.33, stdev=1187.61, samples=3 00:13:28.670 lat (usec) : 500=0.03%, 750=0.01%, 1000=0.03% 00:13:28.670 lat (msec) : 2=2.47%, 4=93.56%, 10=3.84%, 20=0.05% 00:13:28.670 cpu : usr=99.35%, sys=0.00%, ctx=26, majf=0, minf=606 00:13:28.670 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:28.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:28.670 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:28.670 issued rwts: total=49349,49044,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:28.670 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:28.670 00:13:28.670 Run status group 0 (all jobs): 00:13:28.670 READ: bw=96.3MiB/s (101MB/s), 96.3MiB/s-96.3MiB/s (101MB/s-101MB/s), io=193MiB (202MB), run=2001-2001msec 00:13:28.670 WRITE: bw=95.7MiB/s (100MB/s), 95.7MiB/s-95.7MiB/s (100MB/s-100MB/s), io=192MiB (201MB), run=2001-2001msec 00:13:28.670 ----------------------------------------------------- 00:13:28.670 Suppressions used: 00:13:28.670 count bytes template 00:13:28.670 1 32 /usr/src/fio/parse.c 00:13:28.670 1 8 libtcmalloc_minimal.so 00:13:28.670 ----------------------------------------------------- 00:13:28.670 00:13:28.670 06:39:40 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:28.670 06:39:40 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:13:28.670 00:13:28.670 real 0m45.235s 00:13:28.670 user 0m32.112s 00:13:28.670 sys 0m21.810s 00:13:28.670 06:39:40 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:28.670 06:39:40 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:13:28.670 ************************************ 00:13:28.670 END TEST nvme_fio 00:13:28.670 ************************************ 00:13:28.670 00:13:28.670 real 1m54.778s 00:13:28.670 user 3m53.689s 00:13:28.670 sys 0m32.275s 00:13:28.670 06:39:40 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:28.670 06:39:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:28.670 ************************************ 00:13:28.670 END TEST nvme 00:13:28.670 ************************************ 00:13:28.670 06:39:40 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:13:28.670 06:39:40 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:13:28.670 06:39:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:28.671 06:39:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:28.671 06:39:40 -- common/autotest_common.sh@10 -- # set +x 00:13:28.671 ************************************ 00:13:28.671 START TEST nvme_scc 00:13:28.671 ************************************ 00:13:28.671 06:39:40 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:13:28.671 * Looking for test storage... 
00:13:28.671 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:28.671 06:39:40 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:28.671 06:39:40 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version 00:13:28.671 06:39:40 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:28.671 06:39:40 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:28.671 06:39:40 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:28.671 06:39:40 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:28.671 06:39:40 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:28.671 06:39:40 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:13:28.671 06:39:40 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:13:28.671 06:39:40 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:13:28.671 06:39:40 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:13:28.671 06:39:40 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:13:28.671 06:39:40 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:13:28.671 06:39:40 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:13:28.671 06:39:40 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:28.671 06:39:40 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:13:28.671 06:39:40 nvme_scc -- scripts/common.sh@345 -- # : 1 00:13:28.671 06:39:40 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:28.671 06:39:40 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:28.671 06:39:40 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:13:28.671 06:39:40 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:13:28.671 06:39:40 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:28.671 06:39:40 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:13:28.671 06:39:40 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:28.671 06:39:40 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:13:28.671 06:39:40 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:13:28.671 06:39:40 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:28.671 06:39:40 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:13:28.671 06:39:40 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:28.671 06:39:40 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:28.671 06:39:40 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:28.671 06:39:40 nvme_scc -- scripts/common.sh@368 -- # return 0 00:13:28.671 06:39:40 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:28.671 06:39:40 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:28.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:28.671 --rc genhtml_branch_coverage=1 00:13:28.671 --rc genhtml_function_coverage=1 00:13:28.671 --rc genhtml_legend=1 00:13:28.671 --rc geninfo_all_blocks=1 00:13:28.671 --rc geninfo_unexecuted_blocks=1 00:13:28.671 00:13:28.671 ' 00:13:28.671 06:39:40 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:28.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:28.671 --rc genhtml_branch_coverage=1 00:13:28.671 --rc genhtml_function_coverage=1 00:13:28.671 --rc genhtml_legend=1 00:13:28.671 --rc geninfo_all_blocks=1 00:13:28.671 --rc geninfo_unexecuted_blocks=1 00:13:28.671 00:13:28.671 ' 00:13:28.671 06:39:40 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:13:28.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:28.671 --rc genhtml_branch_coverage=1 00:13:28.671 --rc genhtml_function_coverage=1 00:13:28.671 --rc genhtml_legend=1 00:13:28.671 --rc geninfo_all_blocks=1 00:13:28.671 --rc geninfo_unexecuted_blocks=1 00:13:28.671 00:13:28.671 ' 00:13:28.671 06:39:40 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:28.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:28.671 --rc genhtml_branch_coverage=1 00:13:28.671 --rc genhtml_function_coverage=1 00:13:28.671 --rc genhtml_legend=1 00:13:28.671 --rc geninfo_all_blocks=1 00:13:28.671 --rc geninfo_unexecuted_blocks=1 00:13:28.671 00:13:28.671 ' 00:13:28.671 06:39:40 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:28.671 06:39:40 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:28.671 06:39:40 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:13:28.671 06:39:40 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:13:28.671 06:39:40 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:28.671 06:39:40 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:13:28.671 06:39:40 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:28.671 06:39:40 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:28.671 06:39:40 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:28.671 06:39:40 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.671 06:39:40 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.671 06:39:40 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.671 06:39:40 nvme_scc -- paths/export.sh@5 -- # export PATH 00:13:28.671 06:39:40 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
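The lt/cmp_versions trace above is scripts/common.sh checking whether the installed lcov predates 2.x, which decides the spelling of the coverage options exported into LCOV_OPTS. A hedged re-creation of that field-by-field comparison; cmp_lt is an illustrative name, and non-numeric fields are simply treated as 0:

    # Split both versions on '.', '-' or ':' and compare numerically,
    # field by field; returns 0 when $1 < $2.
    cmp_lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal
    }
    cmp_lt 1.15 2 && echo "lcov predates 2.x"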
00:13:28.671 06:39:40 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:13:28.671 06:39:40 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:13:28.671 06:39:40 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:13:28.671 06:39:40 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:13:28.671 06:39:40 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:13:28.671 06:39:40 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:13:28.671 06:39:40 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:13:28.671 06:39:40 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:13:28.671 06:39:40 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:13:28.671 06:39:40 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:28.671 06:39:40 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:13:28.671 06:39:40 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:13:28.671 06:39:40 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:13:28.671 06:39:40 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:28.671 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:28.671 Waiting for block devices as requested 00:13:28.671 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:28.671 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:28.671 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:28.671 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:33.953 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:33.953 06:39:46 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:13:33.953 06:39:46 nvme_scc -- scripts/common.sh@18 -- # local i 00:13:33.953 06:39:46 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:13:33.953 06:39:46 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:33.953 06:39:46 nvme_scc -- scripts/common.sh@27 -- # return 0 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
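The wall of eval lines that starts here is nvme_get flattening `nvme id-ctrl /dev/nvme0` into a global associative array, one register per key, so later checks can consult ${nvme0[mdts]}, ${nvme0[oacs]} and so on. A condensed sketch of the same loop without the eval plumbing:

    # Each "field : value" line from nvme-cli becomes one array key.
    declare -gA nvme0=()
    while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue
        val=${val#"${val%%[![:space:]]*}"}     # trim leading whitespace
        nvme0[${reg//[[:space:]]/}]=$val       # e.g. nvme0[vid]=0x1b36
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)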
00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.953 06:39:46 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.953 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
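The dump has just recorded oacs=0x12a for this QEMU controller. OACS is a bitmask of optional admin-command support, so individual capabilities fall out of plain bit tests; an illustrative decode, taking the bit positions from the NVMe specification (bit 1 Format NVM, bit 3 Namespace Management):

    oacs=0x12a                                 # value captured in this trace
    (( oacs & (1 << 1) )) && echo "supports Format NVM"
    (( oacs & (1 << 3) )) && echo "supports Namespace Management"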
00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:13:33.954 06:39:46 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:13:33.954 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.955 06:39:46 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.955 06:39:46 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.955 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:33.956 06:39:46 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:13:33.956 
06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.956 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
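At functions.sh@53 the trace pivots from the controller to its namespaces: "local -n _ctrl_ns=nvme0_ns" binds a nameref to the controller's namespace map, and the loop at @54 expands the extglob "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*, which for ctrl=/sys/class/nvme/nvme0 matches both the character node ng0n1 (being parsed here) and the block node nvme0n1 (parsed next), feeding each back through the same parser via "nvme id-ns". A standalone sketch of that enumeration, reusing the parse pattern above (parse_id_ns is an illustrative stand-in for the nvme_get call, not a functions.sh name):

  #!/usr/bin/env bash
  shopt -s extglob                        # needed for the @(...|...) pattern
  parse_id_ns() {                         # same shape as parse_id_ctrl above
    local ref=$1 dev=$2 reg val
    local -gA "$ref=()"
    local -n arr=$ref
    while IFS=: read -r reg val; do
      [[ -n $val ]] && arr[${reg//[[:space:]]/}]=${val# }
    done < <(nvme id-ns "$dev")
  }
  ctrl=/sys/class/nvme/nvme0
  declare -A nvme0_ns=()
  declare -n _ctrl_ns=nvme0_ns            # as at functions.sh@53
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    [[ -e $ns ]] || continue              # functions.sh@55 existence check
    ns_dev=${ns##*/}                      # ng0n1, then nvme0n1
    parse_id_ns "$ns_dev" "/dev/$ns_dev"  # fills the assoc array $ns_dev
    _ctrl_ns[${ns_dev##*n}]=$ns_dev       # keyed by namespace id: slot 1
  done

Both device nodes reduce to the same namespace id, so _ctrl_ns[1] is first set to ng0n1 and then overwritten with nvme0n1 at the second functions.sh@58 hit further down, leaving the block device as the recorded name, exactly the order this trace shows.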
00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:13:33.957 06:39:46 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.957 06:39:46 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:33.957 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:13:33.958 06:39:46 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.958 06:39:46 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:13:33.958 06:39:46 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:13:33.958 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:13:33.959 06:39:46 nvme_scc -- scripts/common.sh@18 -- # local i 00:13:33.959 06:39:46 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:13:33.959 06:39:46 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:33.959 06:39:46 nvme_scc -- scripts/common.sh@27 -- # return 0 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:13:33.959 06:39:46 
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.959 
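Every reg/val pair in this dump is produced by the same small helper: nvme_get runs nvme-cli, splits each "field : value" line on the first colon, and evals the pair into a global associative array named after the device. A sketch reconstructed from the functions.sh@16-@23 trace points; the exact whitespace trimming is an assumption:

    nvme_get() {
        local ref=$1 reg val                         # @17
        shift                                        # @18
        local -gA "$ref=()"                          # @20: declares a global array, e.g. nvme1=()
        while IFS=: read -r reg val; do              # @21: split on the first ':'
            [[ -n $val ]] || continue                # @22: skip headers and blank lines
            reg=${reg//[[:space:]]/}                 # assumed: turns 'lbaf  0 ' into 'lbaf0'
            eval "${ref}[$reg]=\"${val# }\""         # @23: e.g. nvme1[vid]="0x1b36"
        done < <(/usr/local/src/nvme-cli/nvme "$@")  # @16
    }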
06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:13:33.959 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:13:33.960 
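Two of the values just captured are packed encodings. mdts is a power-of-two multiplier of the controller's minimum memory page size, and ver packs the NVMe version bytes, so mdts=7 and ver=0x10400 decode as below (the 4 KiB CAP.MPSMIN is an assumption, typical for QEMU):

    mdts=7 ver=0x10400 mps_min=4096                  # mps_min assumed
    echo "max transfer: $(( (1 << mdts) * mps_min )) bytes"   # 524288 = 512 KiB
    printf 'NVMe %d.%d.%d\n' $(( ver >> 16 )) $(( ver >> 8 & 0xff )) $(( ver & 0xff ))   # NVMe 1.4.0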
06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.960 06:39:46 nvme_scc -- 
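oacs=0x12a is the Optional Admin Command Support bitmask; read against the NVMe base spec, the set bits are Format NVM (1), Namespace Management (3), Directives (5), and Doorbell Buffer Config (8), the last being the paravirtual doorbell feature QEMU exposes:

    oacs=0x12a
    (( oacs & 1 << 1 )) && echo 'Format NVM'
    (( oacs & 1 << 3 )) && echo 'Namespace Management'
    (( oacs & 1 << 5 )) && echo 'Directives'
    (( oacs & 1 << 8 )) && echo 'Doorbell Buffer Config'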
nvme/functions.sh@21 -- # read -r reg val 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 
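wctemp and cctemp are kelvin values, so the warning and critical composite-temperature thresholds reported here work out to 70 °C and 100 °C:

    wctemp=343 cctemp=373
    echo "warning:  $(( wctemp - 273 )) C"    # 70 C
    echo "critical: $(( cctemp - 273 )) C"    # 100 C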
'nvme1[mtfa]="0"' 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:13:33.960 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.961 06:39:46 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
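sqes=0x66 and cqes=0x44 store log2 entry sizes, the required size in the low nibble and the maximum in the high nibble; both nibbles matching means the entry sizes are fixed at the standard 64-byte SQE and 16-byte CQE:

    sqes=0x66 cqes=0x44
    echo "SQE: $(( 1 << (sqes & 0xf) )) bytes"   # 2^6 = 64
    echo "CQE: $(( 1 << (cqes & 0xf) )) bytes"   # 2^4 = 16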
00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.961 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:33.962 06:39:46 nvme_scc -- 
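oncs=0x15d advertises the optional NVM commands. Mapping the set bits to the NVMe base spec gives Compare (0), Dataset Management (2), Write Zeroes (3), the Save/Select field in Get/Set Features (4), Timestamp (6), and Copy (8); Write Uncorrectable, Reservations, and Verify are not offered:

    oncs=0x15d
    names=(Compare 'Write Uncorrectable' 'Dataset Management' 'Write Zeroes'
           'Save/Select' Reservations Timestamp Verify Copy)
    for bit in "${!names[@]}"; do
        (( oncs & 1 << bit )) && echo "${names[bit]}"
    done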
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:33.962 06:39:46 
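With the controller table filled in, the walk turns to namespaces. The extglob pattern in the for loop above (taken verbatim from the trace) matches both the generic character device ng1n1 and the block device nvme1n1 under the controller's sysfs node, and each match goes through the same nvme_get helper with id-ns:

    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme1
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # expands to @("ng1"|"nvme1n")*
        ns_dev=${ns##*/}                       # ng1n1, then nvme1n1
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
    done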
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:13:33.962 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:13:33.963 06:39:46 nvme_scc -- 
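mssrl, mcl, and msrc are the limits for the Copy command that oncs bit 8 advertised: at most 128 blocks per source range, 128 blocks per whole command, and, since msrc is a 0's-based field, up to 128 source ranges:

    mssrl=128 mcl=128 msrc=127
    echo "source ranges per Copy: $(( msrc + 1 ))"    # msrc is 0's based
    echo "blocks per range: $mssrl, blocks per command: $mcl"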
nvme/functions.sh@21 -- # IFS=: 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.963 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.964 
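ng1n1's format table is complete: flbas=0x7 selects LBA format 7, the entry marked "(in use)" with ms:64 lbads:12, i.e. 2^12 = 4096-byte data blocks carrying 64 bytes of metadata each. That makes the nsze captured earlier easy to turn into a capacity:

    nsze=0x17a17a lbads=12
    echo "$(( nsze * (1 << lbads) )) bytes"   # 6343335936, ~6.3 GB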
06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:13:33.964 
06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.964 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.965 06:39:46 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.965 06:39:46 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.965 06:39:46 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:13:33.966 06:39:46 
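The trace above has just finished populating the nvme1n1 associative array, one eval per field of the id-ns output. Stripped of the xtrace noise, the pattern behind every one of these entries is small: nvme_get declares a global associative array named after the device (functions.sh@20), reads the nvme-cli output line by line while splitting at the first colon (functions.sh@21), skips lines with an empty value (functions.sh@22), and evals each reg/val pair into the array (functions.sh@23). A minimal sketch of that loop follows; the key normalization and the NVME_CMD fallback are illustrative simplifications (the log shows the real script invoking /usr/local/src/nvme-cli/nvme directly):

nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                    # e.g. nvme1n1=(), as at functions.sh@20
    while IFS=: read -r reg val; do        # split each line at the first ':'
        [[ -n $val ]] || continue          # functions.sh@22: skip value-less lines
        reg=${reg// /}                     # "lbaf  0 " -> "lbaf0"
        val="${val#"${val%%[! ]*}"}"       # trim leading spaces, keep trailing ones
        eval "${ref}[\$reg]=\"\$val\""     # nvme1n1[dps]=0, nvme1n1[lbaf0]='ms:0 ...'
    done < <("${NVME_CMD:-nvme}" "$@")     # e.g. nvme id-ns /dev/nvme1n1
}

Invoked the way the log shows (for instance nvme_get nvme2 id-ctrl /dev/nvme2, visible a few entries below), this leaves the same key/value pairs queryable afterwards as ${nvme1n1[nsze]}, ${nvme1n1[lbaf7]} and so on.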
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:13:33.966 06:39:46 nvme_scc -- scripts/common.sh@18 -- # local i 00:13:33.966 06:39:46 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:13:33.966 06:39:46 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:33.966 06:39:46 nvme_scc -- scripts/common.sh@27 -- # return 0 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 
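Before the nvme2 dump above begins, the trace shows the bookkeeping done once per controller: nvme1 is recorded in ctrls, the name of its namespace map in nvmes, its PCI address in bdfs (functions.sh@60 through @62), and it is slotted into ordered_ctrls by controller number (@63); the outer loop then advances to /sys/class/nvme/nvme2 and runs it through pci_can_use before parsing (@47 through @52). A condensed sketch of that scan; reading the BDF from the sysfs address file and the permissive pci_can_use stub are assumptions here, not lifted from functions.sh:

pci_can_use() { return 0; }                        # stand-in; the real check in
                                                   # scripts/common.sh consults PCI
                                                   # allow/block lists
declare -A ctrls nvmes bdfs
declare -a ordered_ctrls
for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue                     # functions.sh@48: glob may match nothing
    pci=$(<"$ctrl/address")                        # BDF, e.g. 0000:00:12.0 (assumed source)
    pci_can_use "$pci" || continue                 # functions.sh@50
    ctrl_dev=${ctrl##*/}                           # nvme1, nvme2, ...
    nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"  # fills e.g. the nvme2 array above
    ctrls[$ctrl_dev]=$ctrl_dev                     # functions.sh@60
    nvmes[$ctrl_dev]=${ctrl_dev}_ns                # name of its namespace map (@61)
    bdfs[$ctrl_dev]=$pci                           # @62
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev     # indexed by controller number (@63)
done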
'nvme2[fr]="8.0.0 "' 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.966 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:13:33.967 06:39:46 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.967 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:13:33.968 06:39:46 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.968 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:13:33.969 
06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:33.969 
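The id-ctrl dump for nvme2 ends with the power-state fields just above, and the script turns to that controller's namespaces. Two bash details in the trace are worth unpacking: a nameref (functions.sh@53) lets the generic loop body write into the per-controller map nvme2_ns through the alias _ctrl_ns, and an extglob pattern (@54) matches both the character-device names (ng2n1) and the block-device names (nvme2n1) under the controller's sysfs directory. A sketch of the same shape, assuming extglob is enabled, as it must be for @54 to parse at all:

shopt -s extglob
declare -A nvme2_ns
ctrl=/sys/class/nvme/nvme2
declare -n _ctrl_ns=nvme2_ns                 # functions.sh@53: alias the per-controller map
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    [[ -e $ns ]] || continue                 # @55: e.g. /sys/class/nvme/nvme2/ng2n1
    ns_dev=${ns##*/}                         # ng2n1, ng2n2, nvme2n1, ...
    nvme_get "$ns_dev" id-ns "/dev/$ns_dev"  # @57: fill that namespace's array
    _ctrl_ns[${ns##*n}]=$ns_dev              # @58: key by namespace number
done

With ctrl=/sys/class/nvme/nvme2 the glob expands to @(ng2|nvme2n)*, so ng2n1 and nvme2n1 both land in the same slot _ctrl_ns[1], with whichever is enumerated last winning.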
06:39:46 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.969 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.970 06:39:46 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.970 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.971 06:39:46 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:13:33.971 06:39:46 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.971 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.972 
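
The functions.sh@21-@23 triplets repeating through this trace are single passes of the nvme_get helper: every "field : value" line that nvme-cli prints is split on its first colon and stored into a per-device associative array named after the device node. A minimal bash sketch of that pattern, reconstructed from the @16-@23 trace lines above (the real helper in SPDK's test/nvme/functions.sh differs in detail, e.g. it supplies the nvme-cli binary path itself):

    #!/usr/bin/env bash
    # Reconstruction of the parse loop traced at functions.sh@16-@23;
    # illustrative, not the verbatim SPDK source.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"   # global array named after the device, e.g. ng2n2

        # IFS=: splits each line on the first colon only, so values that
        # themselves contain colons ("ms:0 lbads:9 rp:0") stay intact in val.
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue            # the @22 guard seen above
            reg=${reg//[[:space:]]/}             # drop the field name's padding
            eval "${ref}[${reg}]=\"${val# }\""   # the @23 assignment
        done < <("$@")   # process substitution keeps assignments in this shell
    }

    # usage, mirroring the @57 calls: nvme_get ng2n2 nvme id-ns /dev/ng2n2
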
06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.972 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.973 06:39:46 
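
As a cross-check on the values being captured for ng2n2: flbas=0x4 (recorded above) selects LBA format 4, which the lbaf4 entry just below reports as "ms:0 lbads:12 rp:0 (in use)", i.e. 2^12-byte blocks with no metadata; with nsze=0x100000 blocks that is a 4 GiB namespace. The arithmetic, in plain bash:

    # Values copied from this trace for ng2n2.
    nsze=0x100000   # namespace size, in logical blocks
    flbas=0x4       # Formatted LBA Size field
    lbads=12        # from "lbaf4 : ms:0 lbads:12 rp:0 (in use)"

    fmt=$(( flbas & 0xf ))   # low nibble of FLBAS picks the LBA format index
    bs=$(( 1 << lbads ))     # 2^12 = 4096-byte logical blocks
    echo "format $fmt, ${bs}B blocks, $(( nsze * bs / 1024**3 )) GiB"
    # -> format 4, 4096B blocks, 4 GiB
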
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:33.973 06:39:46 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:33.973 06:39:46 
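
The @54 loop header that keeps reappearing does the namespace discovery with one extglob pattern: for this controller it expands to /sys/class/nvme/nvme2/@(ng2|nvme2n)*, matching both the ngXnY character-device entries and the nvmeXnY block-device entries in a single pass. A standalone sketch of the same idiom, with the paths from this run (illustrative, not the verbatim SPDK source):

    #!/usr/bin/env bash
    shopt -s extglob nullglob

    ctrl=/sys/class/nvme/nvme2
    declare -A _ctrl_ns

    # ${ctrl##*nvme} -> "2" and ${ctrl##*/} -> "nvme2", so the glob below
    # is "$ctrl/"@(ng2|nvme2n)*: ng2n1..ng2n3 first, then nvme2n1 onwards.
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        ns_dev=${ns##*/}              # basename, e.g. ng2n2
        _ctrl_ns[${ns##*n}]=$ns_dev   # the @58 step: index by namespace number
    done

    declare -p _ctrl_ns
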
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.973 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.974 06:39:46 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.974 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.975 06:39:46 nvme_scc -- 
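
One consequence of that alphabetical glob order is visible as the trace moves from ng2n3 to nvme2n1 above: each _ctrl_ns index gets written twice, and the later nvmeXnY block-device name replaces the ngXnY one. In miniature (device names as observed in this run; nvme2n3 is assumed, since the excerpt ends before it):

    declare -A _ctrl_ns
    for ns_dev in ng2n1 ng2n2 ng2n3 nvme2n1 nvme2n2 nvme2n3; do
        _ctrl_ns[${ns_dev##*n}]=$ns_dev   # same assignment as @58, in glob order
    done
    declare -p _ctrl_ns
    # -> declare -A _ctrl_ns=([1]="nvme2n1" [2]="nvme2n2" [3]="nvme2n3")
    #    (key order in declare -p output may vary)
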
nvme/functions.sh@21 -- # read -r reg val 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:33.975 06:39:46 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:13:33.975 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:13:33.976 06:39:46 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.976 06:39:46 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:13:33.976 06:39:46 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
]] 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:13:33.977 06:39:46 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.977 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.978 06:39:46 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:13:33.978 
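Each namespace block like the one above is driven by the @54-@58 loop visible in the trace: glob the controller's sysfs directory for nvmeXnY (or ngXnY) entries, run nvme_get on each, and index the result by namespace number. A rough equivalent, reusing nvme_get_sketch from the earlier example (the extglob pattern is copied from the trace; the surrounding scaffolding is an assumption):

    shopt -s extglob                          # required for the @(...) glob
    declare -A _ctrl_ns=()
    ctrl=/sys/class/nvme/nvme2
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue              # skip if the glob matched nothing
        ns_dev=${ns##*/}                      # nvme2n1, nvme2n2, nvme2n3
        nvme_get_sketch "$ns_dev" "/dev/$ns_dev"
        _ctrl_ns[${ns##*n}]=$ns_dev           # _ctrl_ns[2]=nvme2n2, ...
    done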
06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.978 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:13:33.979 06:39:46 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:33.979 06:39:46 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:33.979 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:13:33.980 06:39:46 nvme_scc -- 
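The lbafN strings collected above encode each LBA format: ms (metadata bytes), lbads (data size as a power of two) and rp (relative performance), with flbas selecting the active format; its low nibble is the index whenever, as here, the namespace reports 16 or fewer formats (nlbaf=7). For this namespace flbas=0x4 picks lbaf4 with lbads:12, i.e. 2^12 = 4096-byte blocks (lbads:9 would be 512). A quick decode, reusing the nvme2n3 array the trace just populated:

    lbaf=$(( ${nvme2n3[flbas]} & 0xf ))              # low nibble -> 4
    fmt=${nvme2n3[lbaf$lbaf]}                        # 'ms:0 lbads:12 rp:0 (in use)'
    lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<<"$fmt")
    echo "block size: $(( 1 << lbads )) bytes"       # -> 4096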
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:13:33.980 06:39:46 nvme_scc -- scripts/common.sh@18 -- # local i 00:13:33.980 06:39:46 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:13:33.980 06:39:46 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:33.980 06:39:46 nvme_scc -- scripts/common.sh@27 -- # return 0 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.980 06:39:46 
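Before a controller is parsed, scripts/common.sh's pci_can_use gates it by PCI address; the bare `[[ =~ 0000:00:13.0 ]]` in the trace is that test with an empty list expanding to nothing, so the device is accepted. A simplified stand-in (substring checks instead of the regex test the trace shows; PCI_ALLOWED and PCI_BLOCKED taken to be space-separated BDF lists, both empty in this run):

    pci_can_use_sketch() {
        local bdf=$1
        [[ " $PCI_BLOCKED " == *" $bdf "* ]] && return 1   # explicitly blocked
        [[ -z $PCI_ALLOWED ]] && return 0                  # no allow list: accept all
        [[ " $PCI_ALLOWED " == *" $bdf "* ]]               # otherwise must be listed
    }
    pci_can_use_sketch 0000:00:13.0 && echo "nvme3 usable"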
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:13:33.980 06:39:46 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:13:33.980 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:13:33.981 06:39:46 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.981 
06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:13:33.981 06:39:46 nvme_scc -- 
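The wctemp and cctemp values above are the controller's warning and critical temperature thresholds, reported in kelvins per NVMe convention; 343 K and 373 K are roughly 70 °C and 100 °C, which a one-liner against the nvme3 array just populated can confirm:

    echo "warn: $(( ${nvme3[wctemp]} - 273 ))C  crit: $(( ${nvme3[cctemp]} - 273 ))C"   # 70C / 100C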
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.981 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.982 
06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:13:33.982 
06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.982 06:39:46 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:13:33.982 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:13:33.983 06:39:46 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:13:33.983 06:39:46 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
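The get_ctrls_with_feature loop traced here asks each discovered controller whether ONCS bit 8, the Copy command (what "scc" stands for in this test), is set. With ONCS = 0x15d the test is non-zero for all four controllers, so each gets echoed; the nvme1/nvme0/nvme3/nvme2 order is just bash's hash order when iterating ${!ctrls[@]}. Condensed:

  # ONCS bit 8 = Copy (Simple Copy) command support.
  # 0x15d = 0b1_0101_1101, so bit 8 (0x100) is set on every controller here.
  ctrl_has_scc() {
    local ctrl=$1 oncs
    oncs=$(get_nvme_ctrl_feature "$ctrl" oncs)   # 0x15d on this run
    (( oncs & 1 << 8 ))
  }

nvme_scc.sh then takes the first controller echoed (nvme1, BDF 0000:00:10.0) as the test target.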
00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:13:33.983 06:39:46 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:13:33.983 06:39:46 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:13:33.983 06:39:46 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:13:33.983 06:39:46 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:34.546 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:34.803 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:34.803 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:34.803 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:34.803 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:35.061 06:39:47 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:13:35.061 06:39:47 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:35.061 06:39:47 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:35.061 06:39:47 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:13:35.061 ************************************ 00:13:35.061 START TEST nvme_simple_copy 00:13:35.061 ************************************ 00:13:35.061 06:39:47 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:13:35.319 Initializing NVMe Controllers 00:13:35.319 Attaching to 0000:00:10.0 00:13:35.319 Controller supports SCC. Attached to 0000:00:10.0 00:13:35.319 Namespace ID: 1 size: 6GB 00:13:35.319 Initialization complete. 
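Per its output below, the simple_copy binary writes LBAs 0 through 63 with random data, issues one NVMe Copy command to destination LBA 256, and verifies the destination (64 matching LBAs). A rough shell equivalent using nvme-cli, assuming the kernel nvme driver is still bound (this run had already rebound the device to uio_pci_generic); the copy flags (--sdlba/--slbs/--blocks) are recalled from nvme-cli's copy command and should be checked against nvme copy --help on your build:

  dev=/dev/nvme0n1
  dd if=/dev/urandom of="$dev" bs=4096 count=64 conv=fsync   # LBAs 0-63
  # one source range: start LBA 0; NLB is zero-based, so 63 means 64 blocks
  nvme copy "$dev" --sdlba=256 --slbs=0 --blocks=63
  cmp <(dd if="$dev" bs=4096 count=64 2>/dev/null) \
      <(dd if="$dev" bs=4096 skip=256 count=64 2>/dev/null) \
    && echo "LBAs matching Written Data: 64"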
00:13:35.319 00:13:35.319 Controller QEMU NVMe Ctrl (12340 ) 00:13:35.319 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:13:35.319 Namespace Block Size:4096 00:13:35.319 Writing LBAs 0 to 63 with Random Data 00:13:35.319 Copied LBAs from 0 - 63 to the Destination LBA 256 00:13:35.319 LBAs matching Written Data: 64 00:13:35.319 00:13:35.319 real 0m0.250s 00:13:35.319 user 0m0.099s 00:13:35.319 sys 0m0.050s 00:13:35.319 06:39:47 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:35.319 06:39:47 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:13:35.319 ************************************ 00:13:35.319 END TEST nvme_simple_copy 00:13:35.319 ************************************ 00:13:35.319 00:13:35.319 real 0m7.505s 00:13:35.319 user 0m1.086s 00:13:35.319 sys 0m1.280s 00:13:35.319 06:39:47 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:35.319 ************************************ 00:13:35.319 END TEST nvme_scc 00:13:35.319 06:39:47 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:13:35.319 ************************************ 00:13:35.319 06:39:47 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:13:35.319 06:39:47 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:13:35.319 06:39:47 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:13:35.319 06:39:47 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:13:35.319 06:39:47 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:13:35.319 06:39:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:35.319 06:39:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:35.319 06:39:47 -- common/autotest_common.sh@10 -- # set +x 00:13:35.319 ************************************ 00:13:35.319 START TEST nvme_fdp 00:13:35.319 ************************************ 00:13:35.319 06:39:47 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:13:35.319 * Looking for test storage... 00:13:35.319 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:35.319 06:39:47 nvme_fdp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:35.319 06:39:47 nvme_fdp -- common/autotest_common.sh@1711 -- # lcov --version 00:13:35.319 06:39:47 nvme_fdp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:35.319 06:39:48 nvme_fdp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:35.319 06:39:48 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:35.319 06:39:48 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:35.319 06:39:48 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:35.319 06:39:48 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:13:35.319 06:39:48 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:13:35.319 06:39:48 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:13:35.319 06:39:48 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:13:35.319 06:39:48 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:13:35.319 06:39:48 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:13:35.319 06:39:48 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:13:35.319 06:39:48 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:35.319 06:39:48 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:13:35.319 06:39:48 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:13:35.319 06:39:48 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:35.319 06:39:48 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:35.319 06:39:48 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:13:35.319 06:39:48 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:13:35.319 06:39:48 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:35.319 06:39:48 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:13:35.319 06:39:48 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:13:35.319 06:39:48 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:13:35.319 06:39:48 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:13:35.319 06:39:48 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:35.319 06:39:48 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:13:35.319 06:39:48 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:13:35.319 06:39:48 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:35.319 06:39:48 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:35.319 06:39:48 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:13:35.319 06:39:48 nvme_fdp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:35.319 06:39:48 nvme_fdp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:35.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.319 --rc genhtml_branch_coverage=1 00:13:35.319 --rc genhtml_function_coverage=1 00:13:35.319 --rc genhtml_legend=1 00:13:35.319 --rc geninfo_all_blocks=1 00:13:35.319 --rc geninfo_unexecuted_blocks=1 00:13:35.319 00:13:35.319 ' 00:13:35.319 06:39:48 nvme_fdp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:35.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.319 --rc genhtml_branch_coverage=1 00:13:35.319 --rc genhtml_function_coverage=1 00:13:35.319 --rc genhtml_legend=1 00:13:35.319 --rc geninfo_all_blocks=1 00:13:35.319 --rc geninfo_unexecuted_blocks=1 00:13:35.319 00:13:35.319 ' 00:13:35.319 06:39:48 nvme_fdp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:35.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.319 --rc genhtml_branch_coverage=1 00:13:35.319 --rc genhtml_function_coverage=1 00:13:35.319 --rc genhtml_legend=1 00:13:35.319 --rc geninfo_all_blocks=1 00:13:35.319 --rc geninfo_unexecuted_blocks=1 00:13:35.319 00:13:35.319 ' 00:13:35.319 06:39:48 nvme_fdp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:35.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.319 --rc genhtml_branch_coverage=1 00:13:35.319 --rc genhtml_function_coverage=1 00:13:35.319 --rc genhtml_legend=1 00:13:35.319 --rc geninfo_all_blocks=1 00:13:35.319 --rc geninfo_unexecuted_blocks=1 00:13:35.319 00:13:35.319 ' 00:13:35.319 06:39:48 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:35.319 06:39:48 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:35.319 06:39:48 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:13:35.319 06:39:48 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:13:35.319 06:39:48 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:35.319 06:39:48 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:13:35.319 06:39:48 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:35.319 06:39:48 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:35.319 06:39:48 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:35.319 06:39:48 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.319 06:39:48 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.319 06:39:48 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.319 06:39:48 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:13:35.319 06:39:48 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.319 06:39:48 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:13:35.319 06:39:48 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:13:35.319 06:39:48 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:13:35.319 06:39:48 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:13:35.319 06:39:48 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:13:35.319 06:39:48 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:13:35.319 06:39:48 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:13:35.319 06:39:48 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:13:35.319 06:39:48 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:13:35.319 06:39:48 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:35.319 06:39:48 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:35.884 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:35.884 Waiting for block devices as requested 00:13:35.884 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:35.884 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:36.140 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:36.141 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:41.414 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:41.414 06:39:53 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:13:41.414 06:39:53 nvme_fdp 
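Before the FDP test proper starts, the scripts/common.sh trace a few lines up probes the installed lcov: lcov --version | awk '{print $NF}' yields the version string, and lt 1.15 2 splits both versions on ./-/: and compares field by field to decide between the pre-2.0 --rc lcov_branch_coverage=1 spelling and the newer option names. The comparison reduced to its core (a sketch, not the verbatim cmp_versions):

  lt() {   # lt A B: succeed when version A < version B
    local -a v1 v2; local i
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    for ((i = 0; i < (${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}); i++)); do
      (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
      (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal is not "less than"
  }
  lt "$(lcov --version | awk '{print $NF}')" 2 &&
    echo "pre-2.0 lcov: use --rc lcov_branch_coverage=1"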
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:13:41.414 06:39:53 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:41.414 06:39:53 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:13:41.414 06:39:53 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:13:41.414 06:39:53 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:13:41.414 06:39:53 nvme_fdp -- scripts/common.sh@18 -- # local i 00:13:41.414 06:39:53 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:13:41.414 06:39:53 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:41.414 06:39:53 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:13:41.414 06:39:53 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:13:41.414 06:39:53 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:13:41.414 06:39:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:13:41.414 06:39:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:41.414 06:39:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:13:41.414 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.414 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.415 06:39:53 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:13:41.415 06:39:53 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:13:41.415 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:41.416 06:39:53 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.416 06:39:53 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.416 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:13:41.417 06:39:53 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.417 
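Two of the id-ctrl fields captured just above are worth decoding: SQES=0x66 and CQES=0x44 each pack the minimum required and maximum supported queue-entry sizes into one byte, as powers of two (low nibble = required, high nibble = maximum). For this QEMU controller that is the standard 64-byte submission entry and 16-byte completion entry:

  sqes=0x66 cqes=0x44
  echo "SQE $((1 << (sqes & 0xf)))..$((1 << (sqes >> 4))) bytes"   # 64..64
  echo "CQE $((1 << (cqes & 0xf)))..$((1 << (cqes >> 4))) bytes"   # 16..16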
06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:13:41.417 06:39:53 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.417 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:13:41.418 06:39:53 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:13:41.418 06:39:53 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.418 06:39:53 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
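
The stretch of trace above and below is a single loop doing all the work: nvme_get runs nvme-cli, splits each "reg : value" output line on the colon, and evals the pair into a global associative array (ng0n1 here). A minimal sketch of that loop, assuming "reg : value" output and using names from the trace rather than the verbatim nvme/functions.sh source:

    # Sketch of the parse loop traced at functions.sh@16-23 above.
    nvme_get() {
        local ref=$1 reg val
        shift
        declare -gA "$ref=()"              # e.g. ng0n1 becomes a global assoc array
        while IFS=: read -r reg val; do    # split "nsze : 0x140000" on the first ':'
            reg=${reg// /} val=${val# }    # drop padding around field name/value
            [[ -n $reg && -n $val ]] || continue   # skip headers/blank lines
            eval "${ref}[$reg]=\"\$val\""  # stores e.g. ng0n1[nsze]=0x140000
        done < <("$@")                     # e.g. nvme id-ns /dev/ng0n1
    }

Call sites in the trace pass only the subcommand (nvme_get ng0n1 id-ns /dev/ng0n1) because functions.sh@16 hard-codes the nvme binary path; this sketch takes the full command instead, and it assumes field names are plain identifiers safe to eval.
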
00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.418 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:13:41.419 06:39:53 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
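
The lbaf0..lbaf7 entries just stored describe the namespace's eight LBA formats, and flbas=0x4 selects the one in use (lbaf4, marked "(in use)" above, with lbads:12). A hypothetical helper, not part of functions.sh, deriving the active block size from those parsed fields; per the NVMe base spec the low 4 bits of FLBAS index the format and lbads is the log2 of the data size:

    lba_bytes() {
        local -n ns=$1                     # pass the array name, e.g. ng0n1 (bash 4.3+)
        local idx=$(( ns[flbas] & 0xf ))   # low nibble selects the format: 0x4 here
        local lbads=${ns[lbaf$idx]#*lbads:}
        lbads=${lbads%% *}                 # "ms:0 lbads:12 rp:0 (in use)" -> 12
        echo $(( 1 << lbads ))             # 2^12 = 4096-byte data blocks
    }

So lba_bytes ng0n1 would print 4096 for the values captured in this trace.
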
00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.419 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.420 06:39:53 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:13:41.420 06:39:53 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:41.420 06:39:53 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:13:41.420 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:13:41.421 06:39:53 nvme_fdp -- scripts/common.sh@18 -- # local i 00:13:41.421 06:39:53 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:13:41.421 06:39:53 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:41.421 06:39:53 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:13:41.421 06:39:53 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:13:41.421 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.422 06:39:53 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
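
Stepping back from the field-by-field parse: the nvme1 pass that began above (functions.sh@47-52) is one iteration of the discovery loop that enumerates /sys/class/nvme/nvme*, filters each controller through pci_can_use, parses id-ctrl, and registers the result (functions.sh@60-63). A condensed sketch under two assumptions, that the PCI BDF comes from the controller's sysfs "address" attribute and that the registries are plain associative arrays:

    declare -A ctrls nvmes bdfs               # registries filled at @60-62
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(<"$ctrl/address")               # e.g. 0000:00:10.0
        pci_can_use "$pci" || continue        # allow/block-list filter, scripts/common.sh
        ctrl_dev=${ctrl##*/}                  # e.g. nvme1
        nvme_get "$ctrl_dev" /usr/local/src/nvme-cli/nvme id-ctrl "/dev/$ctrl_dev"
        ctrls["$ctrl_dev"]=$ctrl_dev          # dev name -> id-ctrl array name
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns     # dev name -> namespace map name
        bdfs["$ctrl_dev"]=$pci                # dev name -> PCI address
    done

The nvme_get call reuses the sketch shown earlier; the namespace sub-loop (@54-58) seen for nvme0 above is elided here.
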
00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.422 06:39:53 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.422 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.423 06:39:53 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.423 06:39:53 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.423 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:13:41.424 06:39:53 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
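The ng1n1 geometry captured above can be sanity-checked: nsze/ncap/nuse are 0x17a17a logical blocks, and flbas=0x7 selects lbaf7 (reported a little further down as lbads:12, i.e. 4096-byte LBAs). A quick arithmetic check in bash, using only values from this run:

    nsze=0x17a17a                      # namespace size in blocks, from id-ns above
    lbads=12                           # lbaf7 'lbads:12' -> 2^12 = 4096-byte LBAs
    echo $(( nsze * (1 << lbads) ))    # 6343335936 bytes, ~5.9 GiB QEMU test disk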
00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.424 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:13:41.425 06:39:53 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
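The namespace scan driving this block is the functions.sh@54 loop shown above: an extglob alternation over /sys/class/nvme/nvme1 that matches both the character node ng1n1 (being parsed here) and the block node nvme1n1 (parsed next). A sketch of the same glob, assuming extglob is enabled; paths as in this run:

    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme1
    # "${ctrl##*nvme}" -> "1" and "${ctrl##*/}" -> "nvme1", so the pattern
    # expands to @("ng1"|"nvme1n")* and matches ng1n1 as well as nvme1n1:
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        echo "namespace node: ${ns##*/}"
    done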
00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:41.425 06:39:53 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:13:41.425 06:39:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:41.426 06:39:53 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:13:41.426 06:39:53 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.426 06:39:53 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.426 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:13:41.427 06:39:53 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
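Both ng1n1 above and nvme1n1 here report flbas=0x7: for a controller with at most 16 LBA formats, flbas bits 3:0 index the active lbaf entry, so format 7 ('ms:64 lbads:12 rp:0 (in use)' in this trace) is current. A small decode sketch using the values from this run:

    flbas=0x7
    fmt=$(( flbas & 0xf ))                         # -> 7, matching the '(in use)' marker
    lbaf7='ms:64 lbads:12 rp:0 (in use)'           # captured above for ng1n1
    lbads=${lbaf7##*lbads:}; lbads=${lbads%% *}    # -> 12
    echo "LBA data $(( 1 << lbads )) B, metadata ${lbaf7%% *}"   # 4096 B, ms:64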
00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:13:41.427 06:39:53 nvme_fdp -- scripts/common.sh@18 -- # local i 00:13:41.427 06:39:53 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:13:41.427 06:39:53 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:41.427 06:39:53 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.427 06:39:53 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:13:41.427 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
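Before nvme2 was claimed, pci_can_use (scripts/common.sh@18-27, a few lines up) gated 0000:00:12.0: the allow-list match and the empty block-list test both fall through and it returns 0. A minimal sketch of that gating, assuming the PCI_ALLOWED/PCI_BLOCKED environment-variable convention used by the SPDK scripts (names hedged; both lists were empty in this run):

    pci_can_use() {
        local bdf=$1 b
        if [[ -n ${PCI_ALLOWED:-} ]]; then               # allow list set: bdf must be on it
            [[ " $PCI_ALLOWED " == *" $bdf "* ]] || return 1
        fi
        for b in ${PCI_BLOCKED:-}; do                    # block list always wins
            [[ $b == "$bdf" ]] && return 1
        done
        return 0                                         # both empty here -> usable
    }
    pci_can_use 0000:00:12.0 && echo "claiming nvme2 at 0000:00:12.0"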
00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:13:41.428 06:39:53 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.428 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:13:41.429 06:39:53 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:13:41.429 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.430 06:39:53 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # 
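The nvme/functions.sh@16-23 frames above are one pass of the test's identify parser: it runs nvme-cli, splits each output line on ':' into a register name and a value, and stores the pair in a global associative array named after the device. A minimal bash sketch of that loop, reconstructed from the trace frames (the key/value whitespace trimming here is an approximation, not the verbatim SPDK helper):

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                 # @20 frame: e.g. nvme2=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue       # @22 frame: skip lines without "reg : val"
            reg=${reg//[[:space:]]/}        # trim the key (approximated)
            val=${val# }                    # drop one leading space (approximated)
            eval "${ref}[\$reg]=\$val"      # @23 frame: e.g. nvme2[vid]=0x1b36
        done < <(/usr/local/src/nvme-cli/nvme "$@")   # @16 frame
    }

Called as in this log, nvme_get nvme2 id-ctrl /dev/nvme2; the -g on the declaration is what makes nvme2[...] readable after the helper returns. Trailing padding in string fields (sn, mn, fr) is kept as nvme-cli prints it.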
00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]]
00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1
00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1
00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val
00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()'
00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1
00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000
00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000
00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000
00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14
00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7
00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4
00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3
00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f
00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0
00:13:41.430 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0
00:13:41.431 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0
00:13:41.431 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0
00:13:41.431 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1
00:13:41.431 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0
00:13:41.431 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0
00:13:41.431 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0
00:13:41.431 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0
00:13:41.431 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0
00:13:41.431 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0
00:13:41.431 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0
00:13:41.431 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0
00:13:41.431 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0
00:13:41.431 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0
00:13:41.431 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0
00:13:41.431 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0
00:13:41.431 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0
00:13:41.431 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128
00:13:41.431 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128
00:13:41.431 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127
00:13:41.431 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0
00:13:41.431 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0
00:13:41.431 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0
00:13:41.431 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0
00:13:41.431 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0
00:13:41.431 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000
00:13:41.431 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000
00:13:41.431 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:13:41.431 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:13:41.431 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:13:41.431 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:13:41.431 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:13:41.432 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:13:41.432 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:13:41.432 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:13:41.432 06:39:53 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
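The @53-58 frames walk the controller's namespaces by globbing sysfs, fill one array per namespace via the same nvme_get helper, and index the arrays by namespace id in nvme2_ns. A sketch of that walk under the same assumptions (extglob is required for the @(...) pattern; ctrl is the sysfs controller path seen in this log):

    shopt -s extglob
    ctrl=/sys/class/nvme/nvme2
    declare -A nvme2_ns=()
    declare -n _ctrl_ns=${ctrl##*/}_ns             # -> nvme2_ns (the trace uses local -n)
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # matches ng2n1, nvme2n1, ...
        [[ -e $ns ]] || continue                   # @55 frame: filters an unmatched glob
        ns_dev=${ns##*/}                           # @56 frame: ng2n1, ng2n2, ng2n3
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"    # @57 frame: fills ng2n1[], ng2n2[], ...
        _ctrl_ns[${ns##*n}]=$ns_dev                # @58 frame: key is the namespace id
    done

The [[ -e ]] guard is what makes the loop safe without nullglob: if neither the ng* character device nor the nvme2n* block device exists, the literal pattern fails the existence test and the iteration is skipped.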
00:13:41.432 06:39:53 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:13:41.432 06:39:53 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
00:13:41.432 06:39:53 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2
00:13:41.432 06:39:53 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2
00:13:41.432 06:39:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val
00:13:41.432 06:39:53 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:13:41.432 06:39:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()'
00:13:41.432 06:39:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2
00:13:41.432 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000
00:13:41.432 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000
00:13:41.432 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000
00:13:41.432 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14
00:13:41.432 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7
00:13:41.432 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4
00:13:41.432 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3
00:13:41.432 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f
00:13:41.432 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0
00:13:41.432 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0
00:13:41.432 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0
00:13:41.432 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0
00:13:41.432 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1
00:13:41.432 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0
00:13:41.432 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0
00:13:41.432 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0
00:13:41.432 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0
00:13:41.432 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0
00:13:41.432 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0
00:13:41.432 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[noiob]=0
00:13:41.432 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0
00:13:41.432 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0
00:13:41.432 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0
00:13:41.432 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0
00:13:41.432 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0
00:13:41.432 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0
00:13:41.433 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128
00:13:41.433 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128
00:13:41.433 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127
00:13:41.433 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0
00:13:41.433 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0
00:13:41.433 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0
00:13:41.433 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0
00:13:41.433 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0
00:13:41.433 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000
00:13:41.433 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000
00:13:41.433 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 '
00:13:41.433 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 '
00:13:41.433 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 '
00:13:41.433 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:13:41.433 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:13:41.433 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 '
00:13:41.433 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 '
00:13:41.433 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:13:41.433 06:39:53 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:13:41.433 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:41.433 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.433 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.433 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:41.433 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:13:41.433 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:13:41.433 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.433 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.433 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:41.433 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:13:41.433 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:13:41.433 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.433 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.433 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:41.433 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:13:41.433 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:13:41.433 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.433 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.433 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:41.433 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:13:41.433 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:13:41.433 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.433 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.433 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:41.433 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:13:41.433 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:13:41.434 
06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
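
The trace around this point shows nvme/functions.sh parsing `nvme id-ns` output for the ng2n3 character device into a bash associative array. Below is a minimal sketch of that nvme_get helper, reconstructed from the @16-@23 markers echoed in this trace; the exact whitespace trimming and the way the nvme binary path (here /usr/local/src/nvme-cli/nvme) is supplied are assumptions, since functions.sh itself is not part of this log:

    nvme_get() {
        local ref=$1 reg val    # functions.sh@17: name of the target array
        shift                   # functions.sh@18: remaining args form the nvme-cli command
        local -gA "$ref=()"     # functions.sh@20: declare a global associative array

        # functions.sh@16/@21: run e.g. `nvme id-ns /dev/ng2n3` and split each
        # output line at the first ':' into a register name and its value
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue          # functions.sh@22: skip lines with no value
            reg=${reg//[[:space:]]/}           # assumed: strip spaces, "lbaf  0" -> lbaf0
            val=${val# }                       # assumed: drop the leading space
            eval "${ref}[${reg}]=\"${val}\""   # functions.sh@23, as echoed in the trace
        done < <("${nvme_bin:-nvme}" "$@")     # nvme_bin is a stand-in for the real path
    }

Each eval record in the trace is this loop storing one identify-namespace field (nsze, ncap, flbas, the lbaf0-lbaf7 format descriptors, and so on) under the array named by $ref.
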
00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:13:41.434 06:39:53 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.434 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.435 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:41.435 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:41.435 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:41.435 06:39:53 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.435 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.435 06:39:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:41.435 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:41.435 06:39:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:41.435 06:39:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:13:41.435 06:39:54 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:41.435 
06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:13:41.435 06:39:54 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.435 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:41.436 
06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
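
At the functions.sh@54-58 markers the trace keeps re-entering the per-namespace discovery loop: it globs the controller's sysfs directory for both the ng2nX character devices and the nvme2nX block devices, runs nvme_get on each, and records the device name in _ctrl_ns keyed by namespace number. A sketch of that loop as it reads from the trace; the shopt setup and the surrounding declarations are assumptions:

    shopt -s extglob nullglob              # assumed: needed for the @(...) glob below
    declare -A _ctrl_ns
    ctrl=/sys/class/nvme/nvme2
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # ng2n* and nvme2n*
        [[ -e $ns ]] || continue                 # functions.sh@55
        ns_dev=${ns##*/}                         # functions.sh@56, e.g. nvme2n1
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"  # functions.sh@57
        _ctrl_ns[${ns##*n}]=$ns_dev              # functions.sh@58: key is the NS number
    done

Because the ng2nX entries sort before the nvme2nX entries, each _ctrl_ns slot is first set to the character device (e.g. _ctrl_ns[2]=ng2n2 above) and later overwritten by the block device of the same namespace (_ctrl_ns[2]=nvme2n2), which matches the sequence of @58 assignments visible in this trace.
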
00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:41.436 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:13:41.437 06:39:54 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.437 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:13:41.438 06:39:54 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.438 06:39:54 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:13:41.438 06:39:54 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.438 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.439 06:39:54 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:13:41.439 06:39:54 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.439 06:39:54 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:41.439 06:39:54 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.439 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:13:41.440 06:39:54 nvme_fdp -- scripts/common.sh@18 -- # local i 00:13:41.440 06:39:54 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:13:41.440 06:39:54 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:41.440 06:39:54 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.440 06:39:54 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.440 06:39:54 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.440 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.441 
06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.441 06:39:54 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.441 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
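The xtrace above shows how nvme/functions.sh builds its controller database: the output of /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 is read line by line with IFS=: read -r reg val, and every non-empty field is stored into a Bash associative array via eval (hence record pairs like eval 'nvme3[mdts]="7"' followed by nvme3[mdts]=7). A minimal sketch of the same idiom follows; the names parse_id_ctrl and ctrl_info are illustrative, not the functions.sh originals, and a fixed array stands in for the per-controller named arrays the script creates:

  # Sketch of the parsing idiom seen in the trace: split each "field : value"
  # line that nvme-cli prints on the first ':' and index a Bash associative
  # array by field name. (Names here are illustrative, not from functions.sh.)
  declare -A ctrl_info
  parse_id_ctrl() {
      local dev=$1 reg val
      while IFS=: read -r reg val; do
          reg=${reg//[[:space:]]/}        # drop the column padding around the field name
          [[ -n $reg && -n $val ]] || continue
          ctrl_info[$reg]=${val# }        # keep the raw value, minus one leading space
      done < <(nvme id-ctrl "$dev")
  }
  # Usage: parse_id_ctrl /dev/nvme3; echo "${ctrl_info[ctratt]}"   # e.g. 0x88010

Because read assigns the remainder of the line to its last variable, multi-colon values such as the ps0 power-state string survive intact, which is exactly what the nvme3[ps0]='mp:25.00W operational ...' records above show.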
00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.442 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:13:41.443 06:39:54 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:13:41.443 06:39:54 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:13:41.443 06:39:54 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:13:41.443 06:39:54 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:13:41.443 06:39:54 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:42.010 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:42.268 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:42.269 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:42.269 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:42.526 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:42.526 06:39:55 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:13:42.526 06:39:55 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:42.526 06:39:55 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:42.526 06:39:55 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:13:42.526 ************************************ 00:13:42.526 START TEST nvme_flexible_data_placement 00:13:42.526 ************************************ 00:13:42.526 06:39:55 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:13:42.783 Initializing NVMe Controllers 00:13:42.783 Attaching to 0000:00:13.0 00:13:42.783 Controller supports FDP Attached to 0000:00:13.0 00:13:42.783 Namespace ID: 1 Endurance Group ID: 1 00:13:42.783 Initialization complete. 
00:13:42.783 00:13:42.783 ================================== 00:13:42.783 == FDP tests for Namespace: #01 == 00:13:42.783 ================================== 00:13:42.783 00:13:42.783 Get Feature: FDP: 00:13:42.783 ================= 00:13:42.783 Enabled: Yes 00:13:42.783 FDP Configuration Index: 0 00:13:42.783 00:13:42.783 FDP configurations log page 00:13:42.783 =========================== 00:13:42.783 Number of FDP configurations: 1 00:13:42.783 Version: 0 00:13:42.783 Size: 112 00:13:42.783 FDP Configuration Descriptor: 0 00:13:42.783 Descriptor Size: 96 00:13:42.783 Reclaim Group Identifier format: 2 00:13:42.783 FDP Volatile Write Cache: Not Present 00:13:42.783 FDP Configuration: Valid 00:13:42.783 Vendor Specific Size: 0 00:13:42.783 Number of Reclaim Groups: 2 00:13:42.783 Number of Reclaim Unit Handles: 8 00:13:42.783 Max Placement Identifiers: 128 00:13:42.783 Number of Namespaces Supported: 256 00:13:42.783 Reclaim Unit Nominal Size: 6000000 bytes 00:13:42.783 Estimated Reclaim Unit Time Limit: Not Reported 00:13:42.783 RUH Desc #000: RUH Type: Initially Isolated 00:13:42.783 RUH Desc #001: RUH Type: Initially Isolated 00:13:42.783 RUH Desc #002: RUH Type: Initially Isolated 00:13:42.783 RUH Desc #003: RUH Type: Initially Isolated 00:13:42.783 RUH Desc #004: RUH Type: Initially Isolated 00:13:42.783 RUH Desc #005: RUH Type: Initially Isolated 00:13:42.783 RUH Desc #006: RUH Type: Initially Isolated 00:13:42.783 RUH Desc #007: RUH Type: Initially Isolated 00:13:42.783 00:13:42.783 FDP reclaim unit handle usage log page 00:13:42.783 ====================================== 00:13:42.783 Number of Reclaim Unit Handles: 8 00:13:42.783 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:13:42.783 RUH Usage Desc #001: RUH Attributes: Unused 00:13:42.783 RUH Usage Desc #002: RUH Attributes: Unused 00:13:42.783 RUH Usage Desc #003: RUH Attributes: Unused 00:13:42.783 RUH Usage Desc #004: RUH Attributes: Unused 00:13:42.783 RUH Usage Desc #005: RUH Attributes: Unused 00:13:42.783 RUH Usage Desc #006: RUH Attributes: Unused 00:13:42.783 RUH Usage Desc #007: RUH Attributes: Unused 00:13:42.783 00:13:42.784 FDP statistics log page 00:13:42.784 ======================= 00:13:42.784 Host bytes with metadata written: 911667200 00:13:42.784 Media bytes with metadata written: 911769600 00:13:42.784 Media bytes erased: 0 00:13:42.784 00:13:42.784 FDP Reclaim unit handle status 00:13:42.784 ============================== 00:13:42.784 Number of RUHS descriptors: 2 00:13:42.784 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000005a91 00:13:42.784 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:13:42.784 00:13:42.784 FDP write on placement id: 0 success 00:13:42.784 00:13:42.784 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:13:42.784 00:13:42.784 IO mgmt send: RUH update for Placement ID: #0 Success 00:13:42.784 00:13:42.784 Get Feature: FDP Events for Placement handle: #0 00:13:42.784 ======================== 00:13:42.784 Number of FDP Events: 6 00:13:42.784 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:13:42.784 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:13:42.784 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:13:42.784 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:13:42.784 FDP Event: #4 Type: Media Reallocated Enabled: No 00:13:42.784 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:13:42.784 00:13:42.784 FDP events log page
00:13:42.784 =================== 00:13:42.784 Number of FDP events: 1 00:13:42.784 FDP Event #0: 00:13:42.784 Event Type: RU Not Written to Capacity 00:13:42.784 Placement Identifier: Valid 00:13:42.784 NSID: Valid 00:13:42.784 Location: Valid 00:13:42.784 Placement Identifier: 0 00:13:42.784 Event Timestamp: 6 00:13:42.784 Namespace Identifier: 1 00:13:42.784 Reclaim Group Identifier: 0 00:13:42.784 Reclaim Unit Handle Identifier: 0 00:13:42.784 00:13:42.784 FDP test passed 00:13:42.784 00:13:42.784 real 0m0.237s 00:13:42.784 user 0m0.077s 00:13:42.784 sys 0m0.058s 00:13:42.784 06:39:55 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:42.784 ************************************ 00:13:42.784 END TEST nvme_flexible_data_placement 00:13:42.784 ************************************ 00:13:42.784 06:39:55 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:13:42.784 ************************************ 00:13:42.784 END TEST nvme_fdp 00:13:42.784 ************************************ 00:13:42.784 00:13:42.784 real 0m7.504s 00:13:42.784 user 0m1.057s 00:13:42.784 sys 0m1.397s 00:13:42.784 06:39:55 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:42.784 06:39:55 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:13:42.784 06:39:55 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:13:42.784 06:39:55 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:13:42.784 06:39:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:42.784 06:39:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:42.784 06:39:55 -- common/autotest_common.sh@10 -- # set +x 00:13:42.784 ************************************ 00:13:42.784 START TEST nvme_rpc 00:13:42.784 ************************************ 00:13:42.784 06:39:55 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:13:43.042 * Looking for test storage... 
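Before the nvme_rpc trace continues, a note on the FDP output above: the four pages the fdp example walks through (configurations, reclaim unit handle usage, statistics, events) are standard NVMe log pages with log identifiers 0x20 through 0x23, scoped to an endurance group. A hedged sketch of pulling the same pages with nvme-cli instead of through SPDK; the device node, the 512-byte length, and using --lsi to select Endurance Group 1 are assumptions, not something the test above does:

  dev=/dev/nvme3 # hypothetical node for the FDP-capable controller
  for lid in 0x20 0x21 0x22 0x23; do
    # LIDs per the NVMe FDP feature: configs, RUH usage, stats, events
    nvme get-log "$dev" --log-id="$lid" --log-len=512 --lsi=1
  done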
00:13:43.042 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:43.042 06:39:55 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:43.042 06:39:55 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:43.042 06:39:55 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:13:43.042 06:39:55 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:43.042 06:39:55 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:43.042 06:39:55 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:43.042 06:39:55 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:43.042 06:39:55 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:13:43.042 06:39:55 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:13:43.042 06:39:55 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:13:43.042 06:39:55 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:13:43.042 06:39:55 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:13:43.042 06:39:55 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:13:43.042 06:39:55 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:13:43.042 06:39:55 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:43.042 06:39:55 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:13:43.042 06:39:55 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:13:43.042 06:39:55 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:43.042 06:39:55 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:43.042 06:39:55 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:13:43.042 06:39:55 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:13:43.042 06:39:55 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:43.042 06:39:55 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:13:43.042 06:39:55 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:43.042 06:39:55 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:13:43.042 06:39:55 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:13:43.042 06:39:55 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:43.042 06:39:55 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:13:43.042 06:39:55 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:43.042 06:39:55 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:43.042 06:39:55 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:43.042 06:39:55 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:13:43.042 06:39:55 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:43.042 06:39:55 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:43.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.042 --rc genhtml_branch_coverage=1 00:13:43.042 --rc genhtml_function_coverage=1 00:13:43.042 --rc genhtml_legend=1 00:13:43.042 --rc geninfo_all_blocks=1 00:13:43.042 --rc geninfo_unexecuted_blocks=1 00:13:43.042 00:13:43.042 ' 00:13:43.042 06:39:55 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:43.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.042 --rc genhtml_branch_coverage=1 00:13:43.042 --rc genhtml_function_coverage=1 00:13:43.042 --rc genhtml_legend=1 00:13:43.042 --rc geninfo_all_blocks=1 00:13:43.042 --rc geninfo_unexecuted_blocks=1 00:13:43.042 00:13:43.042 ' 00:13:43.042 06:39:55 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:13:43.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.042 --rc genhtml_branch_coverage=1 00:13:43.042 --rc genhtml_function_coverage=1 00:13:43.042 --rc genhtml_legend=1 00:13:43.042 --rc geninfo_all_blocks=1 00:13:43.042 --rc geninfo_unexecuted_blocks=1 00:13:43.042 00:13:43.042 ' 00:13:43.042 06:39:55 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:43.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.042 --rc genhtml_branch_coverage=1 00:13:43.042 --rc genhtml_function_coverage=1 00:13:43.042 --rc genhtml_legend=1 00:13:43.042 --rc geninfo_all_blocks=1 00:13:43.042 --rc geninfo_unexecuted_blocks=1 00:13:43.042 00:13:43.042 ' 00:13:43.042 06:39:55 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:43.042 06:39:55 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:13:43.042 06:39:55 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:13:43.042 06:39:55 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:13:43.042 06:39:55 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:13:43.042 06:39:55 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:13:43.042 06:39:55 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:13:43.042 06:39:55 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:13:43.042 06:39:55 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:43.042 06:39:55 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:43.042 06:39:55 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:13:43.042 06:39:55 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:13:43.042 06:39:55 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:43.042 06:39:55 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:13:43.042 06:39:55 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:13:43.042 06:39:55 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=66002 00:13:43.042 06:39:55 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:13:43.042 06:39:55 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:13:43.042 06:39:55 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 66002 00:13:43.042 06:39:55 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 66002 ']' 00:13:43.042 06:39:55 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.042 06:39:55 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:43.042 06:39:55 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.042 06:39:55 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:43.042 06:39:55 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.042 [2024-12-06 06:39:55.743544] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
00:13:43.043 [2024-12-06 06:39:55.743667] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66002 ] 00:13:43.300 [2024-12-06 06:39:55.897548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:43.300 [2024-12-06 06:39:56.002606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:43.300 [2024-12-06 06:39:56.002847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.868 06:39:56 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:43.868 06:39:56 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:43.868 06:39:56 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:13:44.128 Nvme0n1 00:13:44.128 06:39:56 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:13:44.128 06:39:56 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:13:44.392 request: 00:13:44.392 { 00:13:44.392 "bdev_name": "Nvme0n1", 00:13:44.392 "filename": "non_existing_file", 00:13:44.392 "method": "bdev_nvme_apply_firmware", 00:13:44.392 "req_id": 1 00:13:44.392 } 00:13:44.392 Got JSON-RPC error response 00:13:44.392 response: 00:13:44.392 { 00:13:44.392 "code": -32603, 00:13:44.392 "message": "open file failed." 00:13:44.392 } 00:13:44.392 06:39:57 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:13:44.392 06:39:57 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:13:44.392 06:39:57 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:13:44.652 06:39:57 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:44.652 06:39:57 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 66002 00:13:44.652 06:39:57 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 66002 ']' 00:13:44.652 06:39:57 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 66002 00:13:44.652 06:39:57 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:44.652 06:39:57 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:44.652 06:39:57 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66002 00:13:44.652 06:39:57 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:44.652 06:39:57 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:44.652 killing process with pid 66002 00:13:44.652 06:39:57 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66002' 00:13:44.652 06:39:57 nvme_rpc -- common/autotest_common.sh@973 -- # kill 66002 00:13:44.652 06:39:57 nvme_rpc -- common/autotest_common.sh@978 -- # wait 66002 00:13:46.555 00:13:46.555 real 0m3.423s 00:13:46.555 user 0m6.571s 00:13:46.555 sys 0m0.476s 00:13:46.555 06:39:58 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:46.555 ************************************ 00:13:46.555 END TEST nvme_rpc 00:13:46.555 ************************************ 00:13:46.555 06:39:58 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:46.555 06:39:58 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:13:46.555 06:39:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:13:46.555 06:39:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:46.555 06:39:58 -- common/autotest_common.sh@10 -- # set +x 00:13:46.555 ************************************ 00:13:46.555 START TEST nvme_rpc_timeouts 00:13:46.555 ************************************ 00:13:46.555 06:39:58 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:13:46.555 * Looking for test storage... 00:13:46.555 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:46.555 06:39:59 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:46.555 06:39:59 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:46.555 06:39:59 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version 00:13:46.555 06:39:59 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:46.555 06:39:59 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:46.555 06:39:59 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:46.555 06:39:59 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:46.556 06:39:59 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:13:46.556 06:39:59 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:13:46.556 06:39:59 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:13:46.556 06:39:59 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:13:46.556 06:39:59 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:13:46.556 06:39:59 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:13:46.556 06:39:59 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:13:46.556 06:39:59 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:46.556 06:39:59 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:13:46.556 06:39:59 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:13:46.556 06:39:59 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:46.556 06:39:59 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:46.556 06:39:59 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:13:46.556 06:39:59 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:13:46.556 06:39:59 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:46.556 06:39:59 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:13:46.556 06:39:59 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:13:46.556 06:39:59 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:13:46.556 06:39:59 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:13:46.556 06:39:59 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:46.556 06:39:59 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:13:46.556 06:39:59 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:13:46.556 06:39:59 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:46.556 06:39:59 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:46.556 06:39:59 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:13:46.556 06:39:59 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:46.556 06:39:59 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:46.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.556 --rc genhtml_branch_coverage=1 00:13:46.556 --rc genhtml_function_coverage=1 00:13:46.556 --rc genhtml_legend=1 00:13:46.556 --rc geninfo_all_blocks=1 00:13:46.556 --rc geninfo_unexecuted_blocks=1 00:13:46.556 00:13:46.556 ' 00:13:46.556 06:39:59 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:46.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.556 --rc genhtml_branch_coverage=1 00:13:46.556 --rc genhtml_function_coverage=1 00:13:46.556 --rc genhtml_legend=1 00:13:46.556 --rc geninfo_all_blocks=1 00:13:46.556 --rc geninfo_unexecuted_blocks=1 00:13:46.556 00:13:46.556 ' 00:13:46.556 06:39:59 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:46.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.556 --rc genhtml_branch_coverage=1 00:13:46.556 --rc genhtml_function_coverage=1 00:13:46.556 --rc genhtml_legend=1 00:13:46.556 --rc geninfo_all_blocks=1 00:13:46.556 --rc geninfo_unexecuted_blocks=1 00:13:46.556 00:13:46.556 ' 00:13:46.556 06:39:59 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:46.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.556 --rc genhtml_branch_coverage=1 00:13:46.556 --rc genhtml_function_coverage=1 00:13:46.556 --rc genhtml_legend=1 00:13:46.556 --rc geninfo_all_blocks=1 00:13:46.556 --rc geninfo_unexecuted_blocks=1 00:13:46.556 00:13:46.556 ' 00:13:46.556 06:39:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:46.556 06:39:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_66067 00:13:46.556 06:39:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_66067 00:13:46.556 06:39:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=66099 00:13:46.556 06:39:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
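The lt 1.15 2 gate traced above is scripts/common.sh deciding whether the installed lcov predates 2.x, by splitting both dotted versions and comparing them field by field. A condensed sketch of that comparison (a re-statement for illustration, not a copy of the script):

  lt() { # return 0 if dotted version $1 sorts before $2
    local -a v1 v2
    local i
    IFS=.- read -ra v1 <<< "$1"
    IFS=.- read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
      ((10#${v1[i]:-0} < 10#${v2[i]:-0})) && return 0 # 10# avoids octal traps
      ((10#${v1[i]:-0} > 10#${v2[i]:-0})) && return 1
    done
    return 1 # equal is not less-than
  }
  lt 1.15 2 && echo "lcov predates 2.x; use the legacy --rc options"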
00:13:46.556 06:39:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 66099 00:13:46.556 06:39:59 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 66099 ']' 00:13:46.556 06:39:59 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:46.556 06:39:59 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:46.556 06:39:59 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.556 06:39:59 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:46.556 06:39:59 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:13:46.556 06:39:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:13:46.556 [2024-12-06 06:39:59.172057] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:13:46.556 [2024-12-06 06:39:59.172172] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66099 ] 00:13:46.814 [2024-12-06 06:39:59.334726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:46.814 [2024-12-06 06:39:59.437801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:46.814 [2024-12-06 06:39:59.437964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.381 Checking default timeout settings: 00:13:47.381 06:40:00 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:47.381 06:40:00 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:13:47.381 06:40:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:13:47.381 06:40:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:47.639 Making settings changes with rpc: 00:13:47.639 06:40:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:13:47.639 06:40:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:13:47.897 Check default vs. modified settings: 00:13:47.897 06:40:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:13:47.897 06:40:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:48.463 06:40:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:13:48.463 06:40:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:48.463 06:40:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_66067 00:13:48.463 06:40:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:48.463 06:40:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:48.463 06:40:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:13:48.463 06:40:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_66067 00:13:48.463 06:40:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:48.463 06:40:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:48.463 06:40:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:13:48.463 06:40:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:13:48.463 Setting action_on_timeout is changed as expected. 00:13:48.463 06:40:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:13:48.463 06:40:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:48.463 06:40:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:48.463 06:40:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_66067 00:13:48.463 06:40:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:48.463 06:40:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:13:48.463 06:40:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_66067 00:13:48.463 06:40:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:48.463 06:40:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:48.463 06:40:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:13:48.463 06:40:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:13:48.463 Setting timeout_us is changed as expected. 00:13:48.463 06:40:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
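The "Check default vs. modified settings" pass above and below is a plain before/after diff: save_config is dumped twice, each field is pulled out with grep/awk, then stripped to alphanumerics with sed before comparison. A sketch of the round trip; the file names stand in for the real /tmp/settings_default_66067 and /tmp/settings_modified_66067 pair:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" save_config > /tmp/settings_before
  "$rpc" bdev_nvme_set_options --timeout-us=12000000 \
      --timeout-admin-us=24000000 --action-on-timeout=abort
  "$rpc" save_config > /tmp/settings_after
  for setting in action_on_timeout timeout_us timeout_admin_us; do
    before=$(grep "$setting" /tmp/settings_before | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep "$setting" /tmp/settings_after | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    [[ $before != "$after" ]] && echo "Setting $setting is changed as expected."
  done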
00:13:48.463 06:40:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:48.463 06:40:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_66067 00:13:48.463 06:40:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:48.463 06:40:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:48.463 06:40:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:13:48.463 06:40:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_66067 00:13:48.463 06:40:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:48.463 06:40:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:48.463 06:40:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:13:48.463 06:40:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:13:48.463 Setting timeout_admin_us is changed as expected. 00:13:48.463 06:40:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:13:48.463 06:40:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:13:48.463 06:40:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_66067 /tmp/settings_modified_66067 00:13:48.463 06:40:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 66099 00:13:48.463 06:40:00 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 66099 ']' 00:13:48.463 06:40:00 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 66099 00:13:48.463 06:40:00 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:13:48.463 06:40:00 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:48.463 06:40:00 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66099 00:13:48.463 06:40:00 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:48.463 06:40:00 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:48.463 killing process with pid 66099 00:13:48.463 06:40:00 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66099' 00:13:48.463 06:40:00 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 66099 00:13:48.463 06:40:00 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 66099 00:13:49.842 RPC TIMEOUT SETTING TEST PASSED. 00:13:49.842 06:40:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
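killprocess, traced above for pid 66099, guards the kill with a liveness probe (kill -0) and a check that the target really is the SPDK reactor and not a sudo wrapper before signalling and reaping it. A trimmed sketch of that pattern:

  killprocess() {
    local pid=$1 process_name=
    kill -0 "$pid" 2> /dev/null || return 1 # nothing to kill
    [[ $(uname) == Linux ]] && process_name=$(ps --no-headers -o comm= "$pid")
    [[ $process_name == sudo ]] && return 1 # never signal the wrapper itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2> /dev/null || true # reap it if it was our child
  }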
00:13:49.842 00:13:49.842 real 0m3.568s 00:13:49.842 user 0m6.935s 00:13:49.842 sys 0m0.502s 00:13:49.842 06:40:02 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:49.842 06:40:02 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:13:49.842 ************************************ 00:13:49.842 END TEST nvme_rpc_timeouts 00:13:49.842 ************************************ 00:13:49.842 06:40:02 -- spdk/autotest.sh@239 -- # uname -s 00:13:49.842 06:40:02 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:13:49.842 06:40:02 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:13:49.842 06:40:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:49.842 06:40:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:49.842 06:40:02 -- common/autotest_common.sh@10 -- # set +x 00:13:50.100 ************************************ 00:13:50.100 START TEST sw_hotplug 00:13:50.100 ************************************ 00:13:50.100 06:40:02 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:13:50.100 * Looking for test storage... 00:13:50.100 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:50.100 06:40:02 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:50.100 06:40:02 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:50.100 06:40:02 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version 00:13:50.100 06:40:02 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:50.100 06:40:02 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:50.100 06:40:02 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:50.100 06:40:02 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:50.100 06:40:02 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:13:50.100 06:40:02 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:13:50.101 06:40:02 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:13:50.101 06:40:02 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:13:50.101 06:40:02 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:13:50.101 06:40:02 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:13:50.101 06:40:02 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:13:50.101 06:40:02 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:50.101 06:40:02 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:13:50.101 06:40:02 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:13:50.101 06:40:02 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:50.101 06:40:02 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:50.101 06:40:02 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:13:50.101 06:40:02 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:13:50.101 06:40:02 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:50.101 06:40:02 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:13:50.101 06:40:02 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:13:50.101 06:40:02 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:13:50.101 06:40:02 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:13:50.101 06:40:02 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:50.101 06:40:02 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:13:50.101 06:40:02 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:13:50.101 06:40:02 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:50.101 06:40:02 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:50.101 06:40:02 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:13:50.101 06:40:02 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:50.101 06:40:02 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:50.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.101 --rc genhtml_branch_coverage=1 00:13:50.101 --rc genhtml_function_coverage=1 00:13:50.101 --rc genhtml_legend=1 00:13:50.101 --rc geninfo_all_blocks=1 00:13:50.101 --rc geninfo_unexecuted_blocks=1 00:13:50.101 00:13:50.101 ' 00:13:50.101 06:40:02 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:50.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.101 --rc genhtml_branch_coverage=1 00:13:50.101 --rc genhtml_function_coverage=1 00:13:50.101 --rc genhtml_legend=1 00:13:50.101 --rc geninfo_all_blocks=1 00:13:50.101 --rc geninfo_unexecuted_blocks=1 00:13:50.101 00:13:50.101 ' 00:13:50.101 06:40:02 sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:50.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.101 --rc genhtml_branch_coverage=1 00:13:50.101 --rc genhtml_function_coverage=1 00:13:50.101 --rc genhtml_legend=1 00:13:50.101 --rc geninfo_all_blocks=1 00:13:50.101 --rc geninfo_unexecuted_blocks=1 00:13:50.101 00:13:50.101 ' 00:13:50.101 06:40:02 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:50.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.101 --rc genhtml_branch_coverage=1 00:13:50.101 --rc genhtml_function_coverage=1 00:13:50.101 --rc genhtml_legend=1 00:13:50.101 --rc geninfo_all_blocks=1 00:13:50.101 --rc geninfo_unexecuted_blocks=1 00:13:50.101 00:13:50.101 ' 00:13:50.101 06:40:02 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:50.358 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:50.616 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:50.616 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:50.616 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:50.616 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:50.616 06:40:03 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:13:50.616 06:40:03 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:13:50.616 06:40:03 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
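nvmes is filled here by nvme_in_userspace, expanded in the trace that follows: lspci -mm -n -D is filtered to class/subclass 01/08 with prog-if 02 (an NVMe controller), each BDF passes through pci_can_use (the PCI_ALLOWED/PCI_BLOCKED gate), and on Linux a BDF is kept when it shows up under the kernel nvme driver tree. A compact sketch of the enumeration half, with the allow/block filtering elided:

  nvme_in_userspace() {
    local bdf bdfs=()
    for bdf in $(lspci -mm -n -D | grep -i -- -p02 \
      | awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'); do
      # mirrors the [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] checks traced below
      [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && bdfs+=("$bdf")
    done
    ((${#bdfs[@]})) && printf '%s\n' "${bdfs[@]}"
  }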
00:13:50.616 06:40:03 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@233 -- # local class 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@18 -- # local i 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@18 -- # local i 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@18 -- # local i 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:13:50.616 06:40:03 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@18 -- # local i 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:13:50.616 06:40:03 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:50.616 06:40:03 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:13:50.616 06:40:03 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:13:50.616 06:40:03 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:50.873 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:51.131 Waiting for block devices as requested 00:13:51.131 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:51.131 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:51.388 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:51.388 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:56.660 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:56.660 06:40:09 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:13:56.660 06:40:09 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:56.918 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:13:56.919 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:56.919 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:13:57.177 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:13:57.435 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:57.435 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:57.435 06:40:10 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:13:57.435 06:40:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:57.435 06:40:10 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:13:57.435 06:40:10 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:13:57.435 06:40:10 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66960 00:13:57.435 06:40:10 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:13:57.435 06:40:10 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:13:57.435 06:40:10 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:13:57.435 06:40:10 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:13:57.435 06:40:10 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:13:57.435 06:40:10 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:13:57.435 06:40:10 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:13:57.435 06:40:10 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:13:57.435 06:40:10 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:13:57.435 06:40:10 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:13:57.435 06:40:10 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:13:57.435 06:40:10 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:13:57.435 06:40:10 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:13:57.435 06:40:10 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:13:57.692 Initializing NVMe Controllers 00:13:57.692 Attaching to 0000:00:10.0 00:13:57.692 Attaching to 0000:00:11.0 00:13:57.692 Attached to 0000:00:10.0 00:13:57.692 Attached to 0000:00:11.0 00:13:57.693 Initialization complete. Starting I/O... 
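Each hotplug event below is a surprise removal followed by a re-attach. Under the hood, remove_attach_helper drives the standard PCI sysfs knobs; a minimal sketch, where the BDF stands in for each entry of nvmes[] and the pause is the hotplug_wait=6 chosen above:

  bdf=0000:00:10.0 # stand-in for one of "${nvmes[@]}"
  echo 1 > "/sys/bus/pci/devices/$bdf/remove" # surprise-remove the function
  sleep 6                                     # hotplug_wait
  echo 1 > /sys/bus/pci/rescan                # re-enumerate the bus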
00:13:57.693 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:13:57.693 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:13:57.693 00:13:59.063 QEMU NVMe Ctrl (12340 ): 2253 I/Os completed (+2253) 00:13:59.063 QEMU NVMe Ctrl (12341 ): 2349 I/Os completed (+2349) 00:13:59.063 00:13:59.996 QEMU NVMe Ctrl (12340 ): 5341 I/Os completed (+3088) 00:13:59.996 QEMU NVMe Ctrl (12341 ): 5417 I/Os completed (+3068) 00:13:59.996 00:14:00.930 QEMU NVMe Ctrl (12340 ): 8533 I/Os completed (+3192) 00:14:00.930 QEMU NVMe Ctrl (12341 ): 8609 I/Os completed (+3192) 00:14:00.930 00:14:01.894 QEMU NVMe Ctrl (12340 ): 11308 I/Os completed (+2775) 00:14:01.894 QEMU NVMe Ctrl (12341 ): 11407 I/Os completed (+2798) 00:14:01.894 00:14:02.831 QEMU NVMe Ctrl (12340 ): 14372 I/Os completed (+3064) 00:14:02.831 QEMU NVMe Ctrl (12341 ): 14470 I/Os completed (+3063) 00:14:02.831 00:14:03.765 06:40:16 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:03.765 06:40:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:03.765 06:40:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:03.765 [2024-12-06 06:40:16.173887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:03.765 Controller removed: QEMU NVMe Ctrl (12340 ) 00:14:03.765 [2024-12-06 06:40:16.175393] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:03.765 [2024-12-06 06:40:16.175483] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:03.765 [2024-12-06 06:40:16.175517] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:03.765 [2024-12-06 06:40:16.175543] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:03.765 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:03.765 [2024-12-06 06:40:16.177513] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:03.765 [2024-12-06 06:40:16.177568] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:03.765 [2024-12-06 06:40:16.177591] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:03.765 [2024-12-06 06:40:16.177614] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:03.765 06:40:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:03.765 06:40:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:03.765 [2024-12-06 06:40:16.197671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:14:03.765 Controller removed: QEMU NVMe Ctrl (12341 ) 00:14:03.765 [2024-12-06 06:40:16.199096] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:03.765 [2024-12-06 06:40:16.199153] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:03.765 [2024-12-06 06:40:16.199186] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:03.765 [2024-12-06 06:40:16.199212] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:03.765 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:03.765 [2024-12-06 06:40:16.201301] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:03.765 [2024-12-06 06:40:16.201350] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:03.765 [2024-12-06 06:40:16.201381] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:03.765 [2024-12-06 06:40:16.201412] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:03.765 06:40:16 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:14:03.765 06:40:16 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:03.765 06:40:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:03.765 06:40:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:03.765 06:40:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:03.765 00:14:03.765 06:40:16 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:03.765 06:40:16 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:03.765 06:40:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:03.765 06:40:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:03.765 06:40:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:03.765 Attaching to 0000:00:10.0 00:14:03.765 Attached to 0000:00:10.0 00:14:03.765 06:40:16 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:03.765 06:40:16 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:03.765 06:40:16 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:03.765 Attaching to 0000:00:11.0 00:14:03.765 Attached to 0000:00:11.0 00:14:04.700 QEMU NVMe Ctrl (12340 ): 2868 I/Os completed (+2868) 00:14:04.700 QEMU NVMe Ctrl (12341 ): 2638 I/Os completed (+2638) 00:14:04.700 00:14:06.187 QEMU NVMe Ctrl (12340 ): 5587 I/Os completed (+2719) 00:14:06.187 QEMU NVMe Ctrl (12341 ): 5433 I/Os completed (+2795) 00:14:06.187 00:14:06.757 QEMU NVMe Ctrl (12340 ): 8799 I/Os completed (+3212) 00:14:06.757 QEMU NVMe Ctrl (12341 ): 8645 I/Os completed (+3212) 00:14:06.757 00:14:07.685 QEMU NVMe Ctrl (12340 ): 11753 I/Os completed (+2954) 00:14:07.685 QEMU NVMe Ctrl (12341 ): 11618 I/Os completed (+2973) 00:14:07.685 00:14:09.058 QEMU NVMe Ctrl (12340 ): 14742 I/Os completed (+2989) 00:14:09.058 QEMU NVMe Ctrl (12341 ): 14609 I/Os completed (+2991) 00:14:09.058 00:14:09.990 QEMU NVMe Ctrl (12340 ): 17738 I/Os completed (+2996) 00:14:09.990 QEMU NVMe Ctrl (12341 ): 17613 I/Os completed (+3004) 00:14:09.990 00:14:10.920 QEMU NVMe Ctrl (12340 ): 20789 I/Os completed (+3051) 00:14:10.920 QEMU NVMe Ctrl (12341 ): 20660 I/Os completed (+3047) 00:14:10.920 00:14:11.866 QEMU NVMe Ctrl (12340 ): 23847 I/Os completed (+3058) 00:14:11.866 QEMU NVMe Ctrl (12341 ): 23704 I/Os completed (+3044) 
00:14:11.866 00:14:12.797 QEMU NVMe Ctrl (12340 ): 27042 I/Os completed (+3195) 00:14:12.797 QEMU NVMe Ctrl (12341 ): 26890 I/Os completed (+3186) 00:14:12.797 00:14:13.727 QEMU NVMe Ctrl (12340 ): 30237 I/Os completed (+3195) 00:14:13.727 QEMU NVMe Ctrl (12341 ): 30082 I/Os completed (+3192) 00:14:13.727 00:14:14.661 QEMU NVMe Ctrl (12340 ): 33481 I/Os completed (+3244) 00:14:14.661 QEMU NVMe Ctrl (12341 ): 33326 I/Os completed (+3244) 00:14:14.661 00:14:16.031 QEMU NVMe Ctrl (12340 ): 36717 I/Os completed (+3236) 00:14:16.031 QEMU NVMe Ctrl (12341 ): 36562 I/Os completed (+3236) 00:14:16.031 00:14:16.031 06:40:28 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:14:16.031 06:40:28 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:16.031 06:40:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:16.031 06:40:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:16.031 [2024-12-06 06:40:28.491572] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:16.031 Controller removed: QEMU NVMe Ctrl (12340 ) 00:14:16.031 [2024-12-06 06:40:28.492785] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:16.031 [2024-12-06 06:40:28.492835] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:16.031 [2024-12-06 06:40:28.492854] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:16.031 [2024-12-06 06:40:28.492871] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:16.031 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:16.031 [2024-12-06 06:40:28.494662] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:16.031 [2024-12-06 06:40:28.494709] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:16.032 [2024-12-06 06:40:28.494723] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:16.032 [2024-12-06 06:40:28.494737] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:16.032 06:40:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:16.032 06:40:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:16.032 [2024-12-06 06:40:28.513216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:14:16.032 Controller removed: QEMU NVMe Ctrl (12341 ) 00:14:16.032 [2024-12-06 06:40:28.514293] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:16.032 [2024-12-06 06:40:28.514336] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:16.032 [2024-12-06 06:40:28.514356] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:16.032 [2024-12-06 06:40:28.514371] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:16.032 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:16.032 [2024-12-06 06:40:28.516040] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:16.032 [2024-12-06 06:40:28.516077] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:16.032 [2024-12-06 06:40:28.516092] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:16.032 [2024-12-06 06:40:28.516106] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:16.032 06:40:28 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:14:16.032 06:40:28 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:16.032 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:14:16.032 EAL: Scan for (pci) bus failed. 00:14:16.032 06:40:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:16.032 06:40:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:16.032 06:40:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:16.032 06:40:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:16.032 06:40:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:16.032 06:40:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:16.032 06:40:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:16.032 06:40:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:16.032 Attaching to 0000:00:10.0 00:14:16.032 Attached to 0000:00:10.0 00:14:16.289 06:40:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:16.289 06:40:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:16.289 06:40:28 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:16.289 Attaching to 0000:00:11.0 00:14:16.289 Attached to 0000:00:11.0 00:14:16.853 QEMU NVMe Ctrl (12340 ): 1933 I/Os completed (+1933) 00:14:16.853 QEMU NVMe Ctrl (12341 ): 1696 I/Os completed (+1696) 00:14:16.853 00:14:17.836 QEMU NVMe Ctrl (12340 ): 5117 I/Os completed (+3184) 00:14:17.836 QEMU NVMe Ctrl (12341 ): 4880 I/Os completed (+3184) 00:14:17.836 00:14:18.789 QEMU NVMe Ctrl (12340 ): 8240 I/Os completed (+3123) 00:14:18.789 QEMU NVMe Ctrl (12341 ): 7992 I/Os completed (+3112) 00:14:18.789 00:14:19.722 QEMU NVMe Ctrl (12340 ): 11304 I/Os completed (+3064) 00:14:19.722 QEMU NVMe Ctrl (12341 ): 11059 I/Os completed (+3067) 00:14:19.722 00:14:20.654 QEMU NVMe Ctrl (12340 ): 14504 I/Os completed (+3200) 00:14:20.654 QEMU NVMe Ctrl (12341 ): 14259 I/Os completed (+3200) 00:14:20.654 00:14:22.024 QEMU NVMe Ctrl (12340 ): 17364 I/Os completed (+2860) 00:14:22.024 QEMU NVMe Ctrl (12341 ): 17153 I/Os completed (+2894) 00:14:22.024 00:14:22.956 QEMU NVMe Ctrl (12340 ): 20370 I/Os completed (+3006) 00:14:22.956 QEMU NVMe Ctrl (12341 ): 20157 I/Os completed (+3004) 00:14:22.956 
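The echo uio_pci_generic / echo <bdf> / echo '' triplets above are the re-bind half of each cycle. The trace does not show which sysfs files receive those writes, so the paths below are an inference from the standard kernel interface rather than a transcription of sw_hotplug.sh: driver_override pins the device's next probe to a named driver, drivers_probe triggers the probe, and the override is cleared afterwards:

  bdf=0000:00:11.0 # hypothetical; one of the test's two allowed controllers
  echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
  echo "$bdf" > /sys/bus/pci/drivers_probe # re-probe with the override applied
  echo '' > "/sys/bus/pci/devices/$bdf/driver_override" # clear it again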
00:14:23.888 QEMU NVMe Ctrl (12340 ): 23348 I/Os completed (+2978) 00:14:23.889 QEMU NVMe Ctrl (12341 ): 23169 I/Os completed (+3012) 00:14:23.889 00:14:24.820 QEMU NVMe Ctrl (12340 ): 26148 I/Os completed (+2800) 00:14:24.820 QEMU NVMe Ctrl (12341 ): 25969 I/Os completed (+2800) 00:14:24.820 00:14:25.752 QEMU NVMe Ctrl (12340 ): 29280 I/Os completed (+3132) 00:14:25.752 QEMU NVMe Ctrl (12341 ): 29097 I/Os completed (+3128) 00:14:25.752 00:14:26.685 QEMU NVMe Ctrl (12340 ): 32288 I/Os completed (+3008) 00:14:26.685 QEMU NVMe Ctrl (12341 ): 32109 I/Os completed (+3012) 00:14:26.685 00:14:28.055 QEMU NVMe Ctrl (12340 ): 35222 I/Os completed (+2934) 00:14:28.055 QEMU NVMe Ctrl (12341 ): 35048 I/Os completed (+2939) 00:14:28.055 00:14:28.312 06:40:40 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:14:28.312 06:40:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:28.312 06:40:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:28.312 06:40:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:28.312 [2024-12-06 06:40:40.836024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:28.312 Controller removed: QEMU NVMe Ctrl (12340 ) 00:14:28.312 [2024-12-06 06:40:40.838923] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:28.312 [2024-12-06 06:40:40.838978] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:28.312 [2024-12-06 06:40:40.838995] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:28.312 [2024-12-06 06:40:40.839013] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:28.312 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:28.312 [2024-12-06 06:40:40.841353] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:28.312 [2024-12-06 06:40:40.841402] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:28.312 [2024-12-06 06:40:40.841416] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:28.312 [2024-12-06 06:40:40.841431] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:28.312 06:40:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:28.312 06:40:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:28.312 [2024-12-06 06:40:40.867942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:14:28.312 Controller removed: QEMU NVMe Ctrl (12341 ) 00:14:28.312 06:40:40 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:14:28.312 06:40:40 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:28.312 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:14:28.312 EAL: Scan for (pci) bus failed. 
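Each of these rounds is one iteration of the helper's main loop; the trace tags give its shape (sw_hotplug.sh@38 decrements the event counter, @39-40 remove, @56-62 re-attach, @66 sleeps so I/O can run against the restored devices). A skeleton reconstructed from those tags, with details not visible in the trace treated as assumptions:

    hotplug_events=3
    while (( hotplug_events-- )); do
        for dev in "${nvmes[@]}"; do
            echo 1 > "/sys/bus/pci/devices/$dev/remove"
        done
        # ... re-attach as sketched above ...
        sleep 12        # matches the "sleep 12" at sw_hotplug.sh@66
    done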
00:14:28.312 [2024-12-06 06:40:40.870673] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:28.313 [2024-12-06 06:40:40.870760] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:28.313 [2024-12-06 06:40:40.870811] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:28.313 [2024-12-06 06:40:40.870866] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:28.313 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:28.313 [2024-12-06 06:40:40.873512] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:28.313 [2024-12-06 06:40:40.873548] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:28.313 [2024-12-06 06:40:40.873565] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:28.313 [2024-12-06 06:40:40.873580] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:28.313 06:40:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:28.313 06:40:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:28.313 06:40:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:28.569 06:40:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:28.569 06:40:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:28.569 06:40:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:28.569 06:40:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:28.569 06:40:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:28.569 Attaching to 0000:00:10.0 00:14:28.569 Attached to 0000:00:10.0 00:14:28.569 06:40:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:28.569 06:40:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:28.569 06:40:41 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:28.569 Attaching to 0000:00:11.0 00:14:28.569 Attached to 0000:00:11.0 00:14:28.569 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:28.569 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:28.569 [2024-12-06 06:40:41.159607] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:14:40.775 06:40:53 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:14:40.775 06:40:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:40.775 06:40:53 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.97 00:14:40.775 06:40:53 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.97 00:14:40.775 06:40:53 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:14:40.775 06:40:53 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.97 00:14:40.775 06:40:53 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.97 2 00:14:40.775 remove_attach_helper took 42.97s to complete (handling 2 nvme drive(s)) 06:40:53 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:14:47.328 06:40:59 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66960 00:14:47.328 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66960) - No such process 00:14:47.328 06:40:59 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66960 00:14:47.328 06:40:59 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - 
SIGINT SIGTERM EXIT 00:14:47.328 06:40:59 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:14:47.328 06:40:59 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:14:47.328 06:40:59 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67509 00:14:47.328 06:40:59 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:14:47.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:47.328 06:40:59 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67509 00:14:47.328 06:40:59 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 67509 ']' 00:14:47.328 06:40:59 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:47.328 06:40:59 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:47.328 06:40:59 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:47.328 06:40:59 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:47.328 06:40:59 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:47.328 06:40:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:47.328 [2024-12-06 06:40:59.230793] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:14:47.328 [2024-12-06 06:40:59.230917] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67509 ] 00:14:47.328 [2024-12-06 06:40:59.394698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.328 [2024-12-06 06:40:59.523420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.584 06:41:00 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:47.584 06:41:00 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:14:47.584 06:41:00 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:14:47.584 06:41:00 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.584 06:41:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:47.584 06:41:00 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.584 06:41:00 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:14:47.584 06:41:00 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:14:47.584 06:41:00 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:14:47.584 06:41:00 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:14:47.584 06:41:00 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:14:47.584 06:41:00 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:14:47.584 06:41:00 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:14:47.584 06:41:00 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:14:47.584 06:41:00 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:14:47.584 06:41:00 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:14:47.584 06:41:00 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:14:47.584 06:41:00 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:14:47.584 
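From here the same scenario is replayed in target mode (tgt_run_hotplug): instead of the test binary driving the NVMe devices directly, a long-lived spdk_tgt owns them and the script watches its bdevs. The trace shows the moving parts: the daemon is launched, waitforlisten polls /var/tmp/spdk.sock until the RPC server answers, and hotplug monitoring is switched on over RPC (sw_hotplug.sh@109-115). A compressed sketch; the helper internals are assumptions:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    trap 'killprocess $spdk_tgt_pid; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$spdk_tgt_pid"        # block until the UNIX socket accepts RPCs
    rpc_cmd bdev_nvme_set_hotplug -e     # enable SPDK's own hotplug monitor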
06:41:00 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:14:54.133 06:41:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:54.133 06:41:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:54.133 06:41:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:54.133 06:41:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:54.133 06:41:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:54.133 06:41:06 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:54.133 06:41:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:54.133 06:41:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:54.133 06:41:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:54.133 06:41:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:54.133 06:41:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:54.133 06:41:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.133 06:41:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:54.133 06:41:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.133 [2024-12-06 06:41:06.361163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:54.133 06:41:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:14:54.133 06:41:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:54.134 [2024-12-06 06:41:06.362815] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:54.134 [2024-12-06 06:41:06.362858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:54.134 [2024-12-06 06:41:06.362874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:54.134 [2024-12-06 06:41:06.362896] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:54.134 [2024-12-06 06:41:06.362906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:54.134 [2024-12-06 06:41:06.362917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:54.134 [2024-12-06 06:41:06.362926] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:54.134 [2024-12-06 06:41:06.362937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:54.134 [2024-12-06 06:41:06.362945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:54.134 [2024-12-06 06:41:06.362959] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:54.134 [2024-12-06 06:41:06.362967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:54.134 [2024-12-06 06:41:06.362977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:54.134 [2024-12-06 06:41:06.861171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
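The bdev_bdfs helper that keeps appearing in the trace (sw_hotplug.sh@12-13) is how the script decides whether a device is "gone" in target mode: it asks the target for its bdevs over JSON-RPC and extracts the backing PCI addresses. Reconstructed from the xtrace above; the /dev/fd/63 in the log is bash process substitution:

    bdev_bdfs() {
        jq -r '.[].driver_specific.nvme[].pci_address' \
            <(rpc_cmd bdev_get_bdevs) | sort -u
    }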
00:14:54.134 [2024-12-06 06:41:06.863000] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:54.134 [2024-12-06 06:41:06.863043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:54.134 [2024-12-06 06:41:06.863058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:54.134 [2024-12-06 06:41:06.863080] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:54.134 [2024-12-06 06:41:06.863092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:54.134 [2024-12-06 06:41:06.863101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:54.134 [2024-12-06 06:41:06.863112] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:54.134 [2024-12-06 06:41:06.863121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:54.134 [2024-12-06 06:41:06.863131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:54.134 [2024-12-06 06:41:06.863140] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:54.134 [2024-12-06 06:41:06.863150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:54.134 [2024-12-06 06:41:06.863159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:54.134 06:41:06 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:14:54.134 06:41:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:54.134 06:41:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:54.134 06:41:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:54.134 06:41:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:54.134 06:41:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:54.134 06:41:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.134 06:41:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:54.392 06:41:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.392 06:41:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:54.392 06:41:06 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:54.392 06:41:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:54.392 06:41:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:54.392 06:41:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:54.392 06:41:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:54.392 06:41:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:54.392 06:41:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:54.392 06:41:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:54.392 06:41:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:14:54.649 06:41:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:54.649 06:41:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:54.649 06:41:07 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:06.887 06:41:19 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:06.887 06:41:19 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:06.887 06:41:19 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:06.887 06:41:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:06.887 06:41:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:06.887 06:41:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.887 06:41:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:06.887 06:41:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:06.887 06:41:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.887 06:41:19 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:06.887 06:41:19 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:06.887 06:41:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:06.887 06:41:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:06.887 06:41:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:06.887 06:41:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:06.887 06:41:19 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:06.887 06:41:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:06.887 06:41:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:06.887 06:41:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:06.887 06:41:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:06.887 06:41:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:06.887 06:41:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.887 06:41:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:06.887 [2024-12-06 06:41:19.261359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
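After each removal the script polls that helper until the unplugged functions drop out of the bdev list; the "(( 2 > 0 ))", "sleep 0.5" and "Still waiting for ... to be gone" lines are that loop running. Roughly, per the sh@50-51 tags:

    bdfs=($(bdev_bdfs))
    while (( ${#bdfs[@]} > 0 )); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done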
00:15:06.887 [2024-12-06 06:41:19.262687] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:06.887 [2024-12-06 06:41:19.262720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:06.887 [2024-12-06 06:41:19.262731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:06.887 [2024-12-06 06:41:19.262749] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:06.887 [2024-12-06 06:41:19.262756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:06.887 [2024-12-06 06:41:19.262764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:06.887 [2024-12-06 06:41:19.262772] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:06.887 [2024-12-06 06:41:19.262780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:06.887 [2024-12-06 06:41:19.262787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:06.887 [2024-12-06 06:41:19.262795] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:06.887 [2024-12-06 06:41:19.262802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:06.887 [2024-12-06 06:41:19.262810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:06.887 06:41:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.887 06:41:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:15:06.887 06:41:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:07.156 [2024-12-06 06:41:19.661371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
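A note on the repeated "(00/07)" in these completions: that is the NVMe status pair SCT/SC, i.e. status code type 0x0 (generic command status) with status code 0x07, Command Abort Requested. It is the expected status for the outstanding ASYNC EVENT REQUEST admin commands the driver aborts itself when a controller disappears, not a device-side failure. Illustrative lookup only:

    declare -A generic_sc=([0x00]='SUCCESSFUL COMPLETION' [0x07]='ABORTED - BY REQUEST')
    echo "sct=0x0 sc=0x07 -> ${generic_sc[0x07]}"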
00:15:07.156 [2024-12-06 06:41:19.662751] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:07.156 [2024-12-06 06:41:19.662783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:07.156 [2024-12-06 06:41:19.662797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.156 [2024-12-06 06:41:19.662814] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:07.156 [2024-12-06 06:41:19.662824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:07.156 [2024-12-06 06:41:19.662832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.156 [2024-12-06 06:41:19.662841] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:07.156 [2024-12-06 06:41:19.662848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:07.156 [2024-12-06 06:41:19.662856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.156 [2024-12-06 06:41:19.662863] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:07.156 [2024-12-06 06:41:19.662871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:07.156 [2024-12-06 06:41:19.662877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.156 06:41:19 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:15:07.156 06:41:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:07.156 06:41:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:07.156 06:41:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:07.156 06:41:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:07.156 06:41:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:07.156 06:41:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.156 06:41:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:07.156 06:41:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.156 06:41:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:07.156 06:41:19 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:07.156 06:41:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:07.156 06:41:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:07.156 06:41:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:07.417 06:41:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:07.418 06:41:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:07.418 06:41:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:07.418 06:41:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:07.418 06:41:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:15:07.418 06:41:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:07.418 06:41:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:07.418 06:41:20 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:19.651 06:41:32 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:19.651 06:41:32 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:19.651 06:41:32 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:19.651 06:41:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:19.651 06:41:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:19.651 06:41:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:19.651 06:41:32 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.651 06:41:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:19.651 06:41:32 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.651 06:41:32 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:19.651 06:41:32 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:19.651 06:41:32 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:19.651 06:41:32 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:19.651 06:41:32 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:19.651 06:41:32 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:19.651 06:41:32 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:19.651 06:41:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:19.651 06:41:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:19.651 06:41:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:19.651 06:41:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:19.651 06:41:32 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.651 06:41:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:19.651 06:41:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:19.651 06:41:32 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.651 06:41:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:15:19.651 06:41:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:19.651 [2024-12-06 06:41:32.161563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
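The wall of backslashes at sw_hotplug.sh@71 is just xtrace rendering: the right-hand side of a [[ ... == ... ]] is a glob pattern, so bash escapes every character when it prints the trace line. The check itself only asks whether both expected functions are visible again after re-attach, roughly:

    bdfs=($(bdev_bdfs))
    [[ "${bdfs[*]}" == "0000:00:10.0 0000:00:11.0" ]]   # quoted RHS forces a literal compare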
00:15:19.651 [2024-12-06 06:41:32.163005] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:19.651 [2024-12-06 06:41:32.163040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:19.651 [2024-12-06 06:41:32.163051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.651 [2024-12-06 06:41:32.163069] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:19.651 [2024-12-06 06:41:32.163077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:19.652 [2024-12-06 06:41:32.163089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.652 [2024-12-06 06:41:32.163096] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:19.652 [2024-12-06 06:41:32.163104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:19.652 [2024-12-06 06:41:32.163111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.652 [2024-12-06 06:41:32.163120] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:19.652 [2024-12-06 06:41:32.163127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:19.652 [2024-12-06 06:41:32.163135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.222 06:41:32 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:15:20.222 06:41:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:20.222 06:41:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:20.222 06:41:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:20.222 06:41:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:20.222 06:41:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:20.222 06:41:32 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.222 06:41:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:20.222 [2024-12-06 06:41:32.661562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:15:20.222 [2024-12-06 06:41:32.662954] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:20.222 [2024-12-06 06:41:32.662984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.222 [2024-12-06 06:41:32.662996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.222 [2024-12-06 06:41:32.663014] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:20.222 [2024-12-06 06:41:32.663023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.222 [2024-12-06 06:41:32.663031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.222 [2024-12-06 06:41:32.663040] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:20.222 [2024-12-06 06:41:32.663047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.222 [2024-12-06 06:41:32.663057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.222 [2024-12-06 06:41:32.663064] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:20.222 [2024-12-06 06:41:32.663072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.222 [2024-12-06 06:41:32.663078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.222 06:41:32 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.222 06:41:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:15:20.222 06:41:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:20.483 06:41:33 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:15:20.483 06:41:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:20.483 06:41:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:20.483 06:41:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:20.483 06:41:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:20.483 06:41:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:20.483 06:41:33 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.483 06:41:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:20.483 06:41:33 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.745 06:41:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:20.745 06:41:33 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:20.745 06:41:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:20.745 06:41:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:20.745 06:41:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:20.745 06:41:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:20.745 06:41:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:20.745 06:41:33 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:20.745 06:41:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:20.745 06:41:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:15:20.745 06:41:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:20.745 06:41:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:20.745 06:41:33 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:33.053 06:41:45 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:33.053 06:41:45 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:33.053 06:41:45 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:33.053 06:41:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:33.053 06:41:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:33.053 06:41:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:33.053 06:41:45 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.053 06:41:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:33.053 06:41:45 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.053 06:41:45 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:33.053 06:41:45 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:33.053 06:41:45 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.22 00:15:33.053 06:41:45 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.22 00:15:33.053 06:41:45 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:15:33.053 06:41:45 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.22 00:15:33.053 06:41:45 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.22 2 00:15:33.053 remove_attach_helper took 45.22s to complete (handling 2 nvme drive(s)) 06:41:45 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:15:33.053 06:41:45 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.053 06:41:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:33.053 06:41:45 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.053 06:41:45 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:15:33.053 06:41:45 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.053 06:41:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:33.053 06:41:45 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.053 06:41:45 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:15:33.053 06:41:45 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:15:33.053 06:41:45 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:15:33.053 06:41:45 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:15:33.053 06:41:45 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:15:33.053 06:41:45 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:15:33.053 06:41:45 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:15:33.053 06:41:45 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:15:33.053 06:41:45 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:15:33.053 06:41:45 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # 
local hotplug_wait=6 00:15:33.053 06:41:45 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:15:33.053 06:41:45 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:15:33.053 06:41:45 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:15:39.634 06:41:51 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:39.634 06:41:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:39.634 06:41:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:39.634 06:41:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:39.634 06:41:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:39.634 06:41:51 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:39.634 06:41:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:39.634 06:41:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:39.634 06:41:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:39.634 06:41:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:39.634 06:41:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:39.634 06:41:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.634 06:41:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:39.634 06:41:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.634 06:41:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:15:39.634 06:41:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:39.634 [2024-12-06 06:41:51.613334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:15:39.634 [2024-12-06 06:41:51.614587] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:39.634 [2024-12-06 06:41:51.614618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.634 [2024-12-06 06:41:51.614630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.634 [2024-12-06 06:41:51.614647] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:39.634 [2024-12-06 06:41:51.614655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.634 [2024-12-06 06:41:51.614663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.634 [2024-12-06 06:41:51.614671] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:39.634 [2024-12-06 06:41:51.614680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.634 [2024-12-06 06:41:51.614687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.634 [2024-12-06 06:41:51.614696] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:39.634 [2024-12-06 06:41:51.614702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.634 [2024-12-06 06:41:51.614712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.634 [2024-12-06 06:41:52.013341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:15:39.634 [2024-12-06 06:41:52.014703] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:39.634 [2024-12-06 06:41:52.014730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.634 [2024-12-06 06:41:52.014743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.634 [2024-12-06 06:41:52.014759] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:39.634 [2024-12-06 06:41:52.014768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.634 [2024-12-06 06:41:52.014775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.634 [2024-12-06 06:41:52.014784] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:39.634 [2024-12-06 06:41:52.014790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.634 [2024-12-06 06:41:52.014798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.634 [2024-12-06 06:41:52.014805] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:39.634 [2024-12-06 06:41:52.014813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.634 [2024-12-06 06:41:52.014820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.634 06:41:52 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:15:39.634 06:41:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:39.634 06:41:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:39.634 06:41:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:39.634 06:41:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:39.634 06:41:52 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.634 06:41:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:39.634 06:41:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:39.634 06:41:52 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.634 06:41:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:39.634 06:41:52 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:39.634 06:41:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:39.634 06:41:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:39.634 06:41:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:39.634 06:41:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:39.634 06:41:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:39.634 06:41:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # 
for dev in "${nvmes[@]}" 00:15:39.634 06:41:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:39.634 06:41:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:15:39.634 06:41:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:39.634 06:41:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:39.634 06:41:52 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:51.949 06:42:04 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:51.949 06:42:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:51.949 06:42:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:51.949 06:42:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:51.949 06:42:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:51.949 06:42:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:51.949 06:42:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.949 06:42:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:51.949 06:42:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.949 06:42:04 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:51.949 06:42:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:51.949 06:42:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:51.949 06:42:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:51.949 [2024-12-06 06:42:04.413602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:15:51.949 [2024-12-06 06:42:04.415122] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:51.949 [2024-12-06 06:42:04.415173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:51.949 [2024-12-06 06:42:04.415188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.949 [2024-12-06 06:42:04.415211] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:51.949 [2024-12-06 06:42:04.415220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:51.949 [2024-12-06 06:42:04.415231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.949 [2024-12-06 06:42:04.415241] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:51.949 [2024-12-06 06:42:04.415251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:51.949 [2024-12-06 06:42:04.415260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.949 [2024-12-06 06:42:04.415272] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:51.949 [2024-12-06 06:42:04.415280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:51.949 [2024-12-06 06:42:04.415291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.949 06:42:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:51.949 06:42:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:51.949 06:42:04 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:51.949 06:42:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:51.949 06:42:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:51.949 06:42:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:51.949 06:42:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:51.949 06:42:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:51.949 06:42:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.949 06:42:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:51.949 06:42:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.949 06:42:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:15:51.949 06:42:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:52.522 06:42:04 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:15:52.522 06:42:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:52.522 06:42:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:52.522 06:42:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:52.522 06:42:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:52.522 06:42:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:52.522 06:42:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.522 06:42:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:52.522 06:42:05 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.522 [2024-12-06 06:42:05.013629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
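For the record, the "took 42.97s" and "took 45.22s" figures earlier in the run are produced by bash's built-in time with TIMEFORMAT=%2R (elapsed wall-clock seconds, two decimals), captured by the timing_cmd wrapper visible in the autotest_common.sh tags. The pattern, stripped to its core (the real helper's redirection handling is more involved):

    TIMEFORMAT=%2R
    helper_time=$( { time remove_attach_helper 3 6 true >/dev/null; } 2>&1 )
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" 2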
00:15:52.522 [2024-12-06 06:42:05.015215] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:52.522 [2024-12-06 06:42:05.015433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:52.522 [2024-12-06 06:42:05.015514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:52.522 [2024-12-06 06:42:05.015543] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:52.522 [2024-12-06 06:42:05.015562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:52.522 [2024-12-06 06:42:05.015572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:52.522 [2024-12-06 06:42:05.015586] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:52.522 [2024-12-06 06:42:05.015595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:52.522 [2024-12-06 06:42:05.015607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:52.522 [2024-12-06 06:42:05.015617] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:52.522 [2024-12-06 06:42:05.015629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:52.522 [2024-12-06 06:42:05.015637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:52.522 06:42:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:15:52.522 06:42:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:53.094 06:42:05 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:15:53.094 06:42:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:53.094 06:42:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:53.094 06:42:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:53.094 06:42:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:53.094 06:42:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:53.094 06:42:05 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.094 06:42:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:53.094 06:42:05 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.094 06:42:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:53.094 06:42:05 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:53.094 06:42:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:53.094 06:42:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:53.094 06:42:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:53.094 06:42:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:53.094 06:42:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:53.094 06:42:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:53.094 06:42:05 
sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:53.094 06:42:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:15:53.353 06:42:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:53.353 06:42:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:53.353 06:42:05 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:05.612 06:42:17 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:16:05.612 06:42:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:16:05.612 06:42:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:16:05.612 06:42:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:05.613 06:42:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:05.613 06:42:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:05.613 06:42:17 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.613 06:42:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:05.613 06:42:17 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.613 06:42:17 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:05.613 06:42:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:05.613 06:42:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:05.613 06:42:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:05.613 06:42:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:05.613 06:42:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:05.613 06:42:17 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:16:05.613 06:42:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:05.613 06:42:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:05.613 06:42:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:05.613 06:42:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:05.613 06:42:17 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.613 06:42:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:05.613 06:42:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:05.613 06:42:17 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.613 06:42:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:16:05.613 06:42:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:05.613 [2024-12-06 06:42:18.013824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:16:05.613 [2024-12-06 06:42:18.015286] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:05.613 [2024-12-06 06:42:18.015329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:05.613 [2024-12-06 06:42:18.015341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.613 [2024-12-06 06:42:18.015360] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:05.613 [2024-12-06 06:42:18.015369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:05.613 [2024-12-06 06:42:18.015378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.613 [2024-12-06 06:42:18.015387] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:05.613 [2024-12-06 06:42:18.015398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:05.613 [2024-12-06 06:42:18.015405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.613 [2024-12-06 06:42:18.015414] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:05.613 [2024-12-06 06:42:18.015420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:05.613 [2024-12-06 06:42:18.015429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.871 06:42:18 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:16:05.871 06:42:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:05.871 06:42:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:05.871 06:42:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:05.871 06:42:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:05.871 06:42:18 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.871 06:42:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:05.871 06:42:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:05.871 06:42:18 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.871 [2024-12-06 06:42:18.513857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
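The run ends (below) by tearing the target down with killprocess 67509; the same helper family produced the earlier "kill: (66960) - No such process" line, where kill -0 serves purely as a liveness probe for a process that had already exited on its own. Minimal sketches of both idioms; the real helpers wrap this in uname/ps checks:

    kill -0 "$pid" 2>/dev/null || echo "process $pid already gone"   # probe only
    killprocess() { local pid=$1; kill "$pid" && wait "$pid"; }      # actual teardown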
00:16:05.871 [2024-12-06 06:42:18.515716] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:05.871 [2024-12-06 06:42:18.515766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:05.871 [2024-12-06 06:42:18.515786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.871 [2024-12-06 06:42:18.515811] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:05.871 [2024-12-06 06:42:18.515826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:05.871 [2024-12-06 06:42:18.515839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.871 [2024-12-06 06:42:18.515854] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:05.871 [2024-12-06 06:42:18.515866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:05.871 [2024-12-06 06:42:18.515881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.871 [2024-12-06 06:42:18.515893] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:05.871 [2024-12-06 06:42:18.515910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:05.871 [2024-12-06 06:42:18.515923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.871 06:42:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:16:05.871 06:42:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:06.437 06:42:19 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:16:06.437 06:42:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:06.437 06:42:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:06.437 06:42:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:06.437 06:42:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:06.437 06:42:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:06.437 06:42:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.437 06:42:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:06.437 06:42:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.437 06:42:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:16:06.437 06:42:19 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:06.437 06:42:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:06.437 06:42:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:06.437 06:42:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:06.695 06:42:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:06.695 06:42:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:06.695 06:42:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:06.695 06:42:19 
sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:06.695 06:42:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:16:06.695 06:42:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:06.695 06:42:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:06.695 06:42:19 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:18.928 06:42:31 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:16:18.928 06:42:31 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:16:18.928 06:42:31 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:16:18.928 06:42:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:18.928 06:42:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:18.928 06:42:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:18.928 06:42:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.928 06:42:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:18.928 06:42:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.928 06:42:31 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:18.928 06:42:31 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:18.928 06:42:31 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.78 00:16:18.928 06:42:31 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.78 00:16:18.928 06:42:31 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:16:18.928 06:42:31 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.78 00:16:18.928 06:42:31 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.78 2 00:16:18.928 remove_attach_helper took 45.78s to complete (handling 2 nvme drive(s)) 06:42:31 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:16:18.928 06:42:31 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67509 00:16:18.928 06:42:31 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 67509 ']' 00:16:18.928 06:42:31 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 67509 00:16:18.928 06:42:31 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:16:18.928 06:42:31 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:18.928 06:42:31 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67509 00:16:18.928 06:42:31 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:18.928 06:42:31 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:18.928 06:42:31 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67509' 00:16:18.928 killing process with pid 67509 00:16:18.928 06:42:31 sw_hotplug -- common/autotest_common.sh@973 -- # kill 67509 00:16:18.928 06:42:31 sw_hotplug -- common/autotest_common.sh@978 -- # wait 67509 00:16:19.859 06:42:32 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:20.117 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:20.682 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:20.682 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:20.682 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:20.682 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 
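With both controllers back, the harness timer (autotest_common.sh@719-@722) reports 45.78 s for remove_attach_helper, and the test tears down the SPDK target with killprocess. A sketch of killprocess as traced at autotest_common.sh@953-@978; xtrace only shows the branches actually taken, so the sudo path is left empty:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1                    # @954: require a pid
        kill -0 "$pid" 2>/dev/null || return 0       # @958: already gone
        if [[ $(uname) == Linux ]]; then             # @959
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")   # @960: reactor_0 here
            if [[ $process_name == sudo ]]; then     # @964: branch not taken in this run
                :   # handling of sudo-wrapped targets elided
            fi
        fi
        echo "killing process with pid $pid"         # @972
        kill "$pid"                                  # @973
        wait "$pid"                                  # @978
    }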
00:16:20.682 00:16:20.682 real 2m30.758s 00:16:20.682 user 1m51.592s 00:16:20.682 sys 0m17.843s 00:16:20.682 06:42:33 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:20.682 ************************************ 00:16:20.682 END TEST sw_hotplug 00:16:20.682 06:42:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:20.682 ************************************ 00:16:20.682 06:42:33 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:16:20.682 06:42:33 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:16:20.682 06:42:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:20.682 06:42:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:20.682 06:42:33 -- common/autotest_common.sh@10 -- # set +x 00:16:20.682 ************************************ 00:16:20.682 START TEST nvme_xnvme 00:16:20.682 ************************************ 00:16:20.682 06:42:33 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:16:20.942 * Looking for test storage... 00:16:20.942 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:20.942 06:42:33 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:20.942 06:42:33 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:16:20.942 06:42:33 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:20.942 06:42:33 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:20.942 06:42:33 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:20.942 06:42:33 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:20.942 06:42:33 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:20.942 06:42:33 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:16:20.942 06:42:33 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:16:20.942 06:42:33 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:16:20.942 06:42:33 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:16:20.942 06:42:33 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:16:20.942 06:42:33 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:16:20.942 06:42:33 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:16:20.942 06:42:33 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:20.942 06:42:33 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:16:20.942 06:42:33 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:16:20.942 06:42:33 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:20.942 06:42:33 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:20.942 06:42:33 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:16:20.942 06:42:33 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:16:20.942 06:42:33 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:20.942 06:42:33 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:16:20.943 06:42:33 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:16:20.943 06:42:33 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:16:20.943 06:42:33 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:16:20.943 06:42:33 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:20.943 06:42:33 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:16:20.943 06:42:33 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:16:20.943 06:42:33 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:20.943 06:42:33 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:20.943 06:42:33 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:16:20.943 06:42:33 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:20.943 06:42:33 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:20.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.943 --rc genhtml_branch_coverage=1 00:16:20.943 --rc genhtml_function_coverage=1 00:16:20.943 --rc genhtml_legend=1 00:16:20.943 --rc geninfo_all_blocks=1 00:16:20.943 --rc geninfo_unexecuted_blocks=1 00:16:20.943 00:16:20.943 ' 00:16:20.943 06:42:33 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:20.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.943 --rc genhtml_branch_coverage=1 00:16:20.943 --rc genhtml_function_coverage=1 00:16:20.943 --rc genhtml_legend=1 00:16:20.943 --rc geninfo_all_blocks=1 00:16:20.943 --rc geninfo_unexecuted_blocks=1 00:16:20.943 00:16:20.943 ' 00:16:20.943 06:42:33 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:20.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.943 --rc genhtml_branch_coverage=1 00:16:20.943 --rc genhtml_function_coverage=1 00:16:20.943 --rc genhtml_legend=1 00:16:20.943 --rc geninfo_all_blocks=1 00:16:20.943 --rc geninfo_unexecuted_blocks=1 00:16:20.943 00:16:20.943 ' 00:16:20.943 06:42:33 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:20.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.943 --rc genhtml_branch_coverage=1 00:16:20.943 --rc genhtml_function_coverage=1 00:16:20.943 --rc genhtml_legend=1 00:16:20.943 --rc geninfo_all_blocks=1 00:16:20.943 --rc geninfo_unexecuted_blocks=1 00:16:20.943 00:16:20.943 ' 00:16:20.943 06:42:33 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:16:20.943 06:42:33 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:16:20.943 06:42:33 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:16:20.943 06:42:33 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:16:20.943 06:42:33 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:16:20.943 06:42:33 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:16:20.943 06:42:33 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:16:20.943 06:42:33 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:16:20.943 06:42:33 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:16:20.943 06:42:33 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
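Stepping back to the scripts/common.sh trace at the top of this test (@333-@368): that is the version gate deciding which lcov coverage flags to use, invoked as lt 1.15 2 (installed lcov 1.15 against version 2). A reconstruction consistent with the executed branches; the '>' and equality semantics are assumptions, since only the '<' path runs here:

    decimal() {
        local d=$1
        [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0   # assumption: non-numeric part -> 0
    }

    cmp_versions() {
        local ver1 ver1_l ver2 ver2_l
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        local op=$2 v
        ver1_l=${#ver1[@]}    # 2 for "1.15"
        ver2_l=${#ver2[@]}    # 1 for "2"
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            ver1[v]=$(decimal "${ver1[v]:-0}")
            ver2[v]=$(decimal "${ver2[v]:-0}")
            ((ver1[v] > ver2[v])) && { [[ $op == '>' ]]; return; }
            ((ver1[v] < ver2[v])) && { [[ $op == '<' ]]; return; }   # taken: 1 < 2
        done
        [[ $op == *'='* ]]   # assumption: equal versions satisfy only ops allowing equality
    }

    lt() { cmp_versions "$1" '<' "$2"; }

    # As in the log: lcov 1.x still wants the old-style --rc option names.
    lt 1.15 2 && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'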
00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:16:20.943 06:42:33 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:16:20.943 06:42:33 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:16:20.944 06:42:33 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:16:20.944 06:42:33 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:16:20.944 06:42:33 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:16:20.944 06:42:33 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:16:20.944 06:42:33 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:16:20.944 06:42:33 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:16:20.944 06:42:33 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:16:20.944 06:42:33 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:16:20.944 06:42:33 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:16:20.944 06:42:33 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:16:20.944 06:42:33 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:16:20.944 06:42:33 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:16:20.944 06:42:33 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:16:20.944 06:42:33 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:16:20.944 06:42:33 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:16:20.944 06:42:33 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:16:20.944 06:42:33 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:16:20.944 06:42:33 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:16:20.944 06:42:33 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:16:20.944 #define SPDK_CONFIG_H 00:16:20.944 #define SPDK_CONFIG_AIO_FSDEV 1 00:16:20.944 #define SPDK_CONFIG_APPS 1 00:16:20.944 #define SPDK_CONFIG_ARCH native 00:16:20.944 #define SPDK_CONFIG_ASAN 1 00:16:20.944 #undef SPDK_CONFIG_AVAHI 00:16:20.944 #undef SPDK_CONFIG_CET 00:16:20.944 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:16:20.944 #define SPDK_CONFIG_COVERAGE 1 00:16:20.944 #define SPDK_CONFIG_CROSS_PREFIX 00:16:20.944 #undef SPDK_CONFIG_CRYPTO 00:16:20.944 #undef SPDK_CONFIG_CRYPTO_MLX5 00:16:20.944 #undef SPDK_CONFIG_CUSTOMOCF 00:16:20.944 #undef SPDK_CONFIG_DAOS 00:16:20.944 #define SPDK_CONFIG_DAOS_DIR 00:16:20.944 #define SPDK_CONFIG_DEBUG 1 00:16:20.944 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:16:20.944 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:16:20.944 #define SPDK_CONFIG_DPDK_INC_DIR 00:16:20.944 #define SPDK_CONFIG_DPDK_LIB_DIR 00:16:20.944 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:16:20.944 #undef SPDK_CONFIG_DPDK_UADK 00:16:20.944 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:16:20.944 #define SPDK_CONFIG_EXAMPLES 1 00:16:20.944 #undef SPDK_CONFIG_FC 00:16:20.944 #define SPDK_CONFIG_FC_PATH 00:16:20.944 #define SPDK_CONFIG_FIO_PLUGIN 1 00:16:20.944 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:16:20.944 #define SPDK_CONFIG_FSDEV 1 00:16:20.944 #undef SPDK_CONFIG_FUSE 00:16:20.944 #undef SPDK_CONFIG_FUZZER 00:16:20.944 #define SPDK_CONFIG_FUZZER_LIB 00:16:20.944 #undef SPDK_CONFIG_GOLANG 00:16:20.944 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:16:20.944 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:16:20.944 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:16:20.944 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:16:20.944 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:16:20.944 #undef SPDK_CONFIG_HAVE_LIBBSD 00:16:20.944 #undef SPDK_CONFIG_HAVE_LZ4 00:16:20.944 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:16:20.944 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:16:20.944 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:16:20.944 #define SPDK_CONFIG_IDXD 1 00:16:20.944 #define SPDK_CONFIG_IDXD_KERNEL 1 00:16:20.944 #undef SPDK_CONFIG_IPSEC_MB 00:16:20.944 #define SPDK_CONFIG_IPSEC_MB_DIR 00:16:20.944 #define SPDK_CONFIG_ISAL 1 00:16:20.944 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:16:20.944 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:16:20.944 #define SPDK_CONFIG_LIBDIR 00:16:20.944 #undef SPDK_CONFIG_LTO 00:16:20.944 #define SPDK_CONFIG_MAX_LCORES 128 00:16:20.944 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:16:20.944 #define SPDK_CONFIG_NVME_CUSE 1 00:16:20.944 #undef SPDK_CONFIG_OCF 00:16:20.944 #define SPDK_CONFIG_OCF_PATH 00:16:20.944 #define SPDK_CONFIG_OPENSSL_PATH 00:16:20.944 #undef SPDK_CONFIG_PGO_CAPTURE 00:16:20.944 #define SPDK_CONFIG_PGO_DIR 00:16:20.944 #undef SPDK_CONFIG_PGO_USE 00:16:20.944 #define SPDK_CONFIG_PREFIX /usr/local 00:16:20.944 #undef SPDK_CONFIG_RAID5F 00:16:20.944 #undef SPDK_CONFIG_RBD 00:16:20.944 #define SPDK_CONFIG_RDMA 1 00:16:20.944 #define SPDK_CONFIG_RDMA_PROV verbs 00:16:20.944 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:16:20.944 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:16:20.944 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:16:20.944 #define SPDK_CONFIG_SHARED 1 00:16:20.944 #undef SPDK_CONFIG_SMA 00:16:20.944 #define SPDK_CONFIG_TESTS 1 00:16:20.944 #undef SPDK_CONFIG_TSAN 00:16:20.944 #define SPDK_CONFIG_UBLK 1 00:16:20.944 #define SPDK_CONFIG_UBSAN 1 00:16:20.944 #undef SPDK_CONFIG_UNIT_TESTS 00:16:20.944 #undef SPDK_CONFIG_URING 00:16:20.944 #define SPDK_CONFIG_URING_PATH 00:16:20.944 #undef SPDK_CONFIG_URING_ZNS 00:16:20.944 #undef SPDK_CONFIG_USDT 00:16:20.944 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:16:20.944 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:16:20.944 #undef SPDK_CONFIG_VFIO_USER 00:16:20.944 #define SPDK_CONFIG_VFIO_USER_DIR 00:16:20.944 #define SPDK_CONFIG_VHOST 1 00:16:20.944 #define SPDK_CONFIG_VIRTIO 1 00:16:20.944 #undef SPDK_CONFIG_VTUNE 00:16:20.944 #define SPDK_CONFIG_VTUNE_DIR 00:16:20.944 #define SPDK_CONFIG_WERROR 1 00:16:20.944 #define SPDK_CONFIG_WPDK_DIR 00:16:20.944 #define SPDK_CONFIG_XNVME 1 00:16:20.944 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:16:20.944 06:42:33 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:16:20.944 06:42:33 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:20.944 06:42:33 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:16:20.944 06:42:33 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:20.944 06:42:33 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:20.944 06:42:33 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:20.944 06:42:33 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.944 06:42:33 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.944 06:42:33 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.944 06:42:33 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:16:20.944 06:42:33 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.944 06:42:33 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:16:20.944 06:42:33 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:16:20.944 06:42:33 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:16:20.944 06:42:33 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:16:20.944 06:42:33 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:16:20.944 06:42:33 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:16:20.944 06:42:33 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:16:20.944 06:42:33 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:16:20.944 06:42:33 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:16:20.944 06:42:33 nvme_xnvme -- pm/common@68 -- # uname -s 00:16:20.944 06:42:33 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:16:20.944 06:42:33 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:16:20.944 
06:42:33 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:16:20.944 06:42:33 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:16:20.944 06:42:33 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:16:20.944 06:42:33 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:16:20.944 06:42:33 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:16:20.944 06:42:33 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:16:20.944 06:42:33 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:16:20.944 06:42:33 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:16:20.944 06:42:33 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:16:20.944 06:42:33 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:16:20.944 06:42:33 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:16:20.944 06:42:33 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:16:20.944 06:42:33 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:16:20.944 06:42:33 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:16:20.944 06:42:33 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:16:20.944 06:42:33 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:16:20.944 06:42:33 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:16:20.944 06:42:33 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:16:20.945 06:42:33 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:16:20.945 06:42:33 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:16:20.945 06:42:33 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:20.946 06:42:33 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
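The wall of exports traced just above (autotest_common.sh@195-@246) pins sanitizer behaviour for every child process the test spawns. The option strings below are verbatim from the trace; only the assembly of the suppression file is partly guessed, since the bare cat at @206 does not show its target:

    export PYTHONDONTWRITEBYTECODE=1   # keep .pyc debris out of the repo
    export ASAN_OPTIONS='new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0'
    export UBSAN_OPTIONS='halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134'

    # Rebuild the LSAN suppression file; libfuse3 is a known leaker (@242).
    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    echo 'leak:libfuse3.so' >> "$asan_suppression_file"
    export LSAN_OPTIONS=suppressions=$asan_suppression_file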
00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 68876 ]] 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 68876 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.ZXrGXP 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.ZXrGXP/tests/xnvme /tmp/spdk.ZXrGXP 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:16:20.946 06:42:33 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13976326144 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5592084480 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6260633600 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265397248 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493366272 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506162176 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13976326144 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5592084480 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265245696 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265397248 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=151552 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:20.946 06:42:33 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:16:20.946 06:42:33 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt/output 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91294269440 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=8408510464 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:16:20.947 * Looking for test storage... 
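The df -T dump above is set_test_storage (autotest_common.sh@341-@402) sizing up every mount before the xnvme tests touch disk: it builds candidate directories, reads the mount table, and exports the first candidate with at least the requested 2 GiB (plus slack, the 2214592512 at @371). A sketch under those assumptions; the traced avail figures are in bytes, so the real df invocation may carry flags not visible here, and the tmpfs/ramfs growing path (@393, not taken) is reduced to a skip:

    set_test_storage() {
        local requested_size=$1 target_dir mount target_space
        local source fs size use avail _
        local -A mounts fss sizes avails uses

        # @351-@368: the test dir plus /tmp fallbacks, created up front.
        # testdir is set by the caller (run_test) before this runs.
        local storage_fallback
        storage_fallback=$(mktemp -udt spdk.XXXXXX)
        local storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
        mkdir -p "${storage_candidates[@]}"

        # @340/@373: parse `df -T`, dropping the header row.
        while read -r source fs size use avail _ mount; do
            mounts[$mount]=$source
            fss[$mount]=$fs
            sizes[$mount]=$size
            avails[$mount]=$avail
            uses[$mount]=$use
        done < <(df -T | grep -v Filesystem)

        printf '* Looking for test storage...\n'
        for target_dir in "${storage_candidates[@]}"; do
            mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')  # @385
            target_space=${avails[$mount]}                                  # @387
            ((target_space == 0 || target_space < requested_size)) && continue
            [[ ${fss[$mount]} == tmpfs || ${fss[$mount]} == ramfs || $mount == / ]] && continue
            break                                                           # /home on btrfs wins here
        done
        export SPDK_TEST_STORAGE=$target_dir                                # @400
        printf '* Found test storage at %s\n' "$SPDK_TEST_STORAGE"
    }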
00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13976326144 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:20.947 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@1698 -- # set -o errtrace 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@1703 -- # true 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@1705 -- # xtrace_fd 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:20.947 06:42:33 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:20.947 06:42:33 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:20.947 06:42:33 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:20.947 06:42:33 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:16:20.947 06:42:33 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:16:20.947 06:42:33 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:16:20.947 06:42:33 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:16:20.947 06:42:33 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:16:20.947 06:42:33 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:16:20.947 06:42:33 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:16:20.947 06:42:33 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:20.947 06:42:33 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:16:20.947 06:42:33 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:16:20.947 06:42:33 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:20.947 06:42:33 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:20.947 06:42:33 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:16:20.947 06:42:33 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:16:20.947 06:42:33 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:20.947 06:42:33 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:16:20.947 06:42:33 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:16:20.947 06:42:33 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:16:20.947 06:42:33 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:16:20.947 06:42:33 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:20.947 06:42:33 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:16:20.947 06:42:33 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:16:20.947 06:42:33 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:20.947 06:42:33 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:20.947 06:42:33 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:20.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.947 --rc genhtml_branch_coverage=1 00:16:20.947 --rc genhtml_function_coverage=1 00:16:20.947 --rc genhtml_legend=1 00:16:20.947 --rc geninfo_all_blocks=1 00:16:20.947 --rc geninfo_unexecuted_blocks=1 00:16:20.947 00:16:20.947 ' 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:20.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.947 --rc genhtml_branch_coverage=1 00:16:20.947 --rc genhtml_function_coverage=1 00:16:20.947 --rc genhtml_legend=1 00:16:20.947 --rc geninfo_all_blocks=1 
00:16:20.947 --rc geninfo_unexecuted_blocks=1 00:16:20.947 00:16:20.947 ' 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:20.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.947 --rc genhtml_branch_coverage=1 00:16:20.947 --rc genhtml_function_coverage=1 00:16:20.947 --rc genhtml_legend=1 00:16:20.947 --rc geninfo_all_blocks=1 00:16:20.947 --rc geninfo_unexecuted_blocks=1 00:16:20.947 00:16:20.947 ' 00:16:20.947 06:42:33 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:20.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.947 --rc genhtml_branch_coverage=1 00:16:20.947 --rc genhtml_function_coverage=1 00:16:20.947 --rc genhtml_legend=1 00:16:20.947 --rc geninfo_all_blocks=1 00:16:20.947 --rc geninfo_unexecuted_blocks=1 00:16:20.947 00:16:20.947 ' 00:16:20.947 06:42:33 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:20.947 06:42:33 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:16:20.947 06:42:33 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:20.947 06:42:33 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:20.947 06:42:33 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:20.947 06:42:33 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.948 06:42:33 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.948 06:42:33 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.948 06:42:33 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:16:20.948 06:42:33 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.948 06:42:33 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:16:20.948 06:42:33 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:16:20.948 06:42:33 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:16:20.948 06:42:33 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:16:20.948 06:42:33 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:16:20.948 06:42:33 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:16:20.948 06:42:33 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:16:20.948 06:42:33 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:16:20.948 06:42:33 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:16:20.948 06:42:33 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:16:20.948 06:42:33 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:16:20.948 06:42:33 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:16:20.948 06:42:33 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:16:20.948 06:42:33 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:16:20.948 06:42:33 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:16:20.948 06:42:33 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:16:20.948 06:42:33 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:16:20.948 06:42:33 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:16:20.948 06:42:33 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:16:20.948 06:42:33 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:16:20.948 06:42:33 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:16:20.948 06:42:33 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:21.204 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:21.461 Waiting for block devices as requested 00:16:21.461 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:21.461 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:21.718 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:16:21.718 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:16:26.977 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:16:26.977 06:42:39 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:16:26.977 06:42:39 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:16:26.977 06:42:39 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:16:27.234 06:42:39 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:16:27.234 06:42:39 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:16:27.234 06:42:39 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:16:27.234 06:42:39 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:16:27.234 06:42:39 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:16:27.234 No valid GPT data, bailing 00:16:27.234 06:42:39 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:27.234 06:42:39 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:16:27.234 06:42:39 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:16:27.234 06:42:39 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:16:27.234 06:42:39 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:16:27.234 06:42:39 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:16:27.234 06:42:39 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:16:27.234 06:42:39 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:16:27.234 06:42:39 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:16:27.234 06:42:39 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:16:27.234 06:42:39 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:16:27.234 06:42:39 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:16:27.234 06:42:39 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:16:27.234 06:42:39 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:16:27.234 06:42:39 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:16:27.234 06:42:39 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:16:27.234 06:42:39 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:16:27.234 06:42:39 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:27.234 06:42:39 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:27.234 06:42:39 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:27.234 ************************************ 00:16:27.234 START TEST xnvme_rpc 00:16:27.234 ************************************ 00:16:27.234 06:42:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:16:27.234 06:42:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:16:27.234 06:42:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:16:27.234 06:42:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:16:27.234 06:42:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:16:27.234 06:42:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69264 00:16:27.234 06:42:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69264 00:16:27.234 06:42:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69264 ']' 00:16:27.234 06:42:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:27.234 06:42:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.234 06:42:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:27.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.234 06:42:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.234 06:42:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:27.234 06:42:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.505 [2024-12-06 06:42:39.984926] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
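(Annotation, not captured log output.) Above, prep_nvme reloads the nvme driver with poll_queues=10 and then probes /dev/nvme0n1 with block_in_use: spdk-gpt.py finds no valid GPT ("No valid GPT data, bailing"), blkid reports an empty PTTYPE, so the check returns 1 and the device is treated as free for testing. A sketch of that check as exercised here; the control flow is inferred from the trace, not copied from scripts/common.sh:

# A device counts as "in use" only if some partition table is detected.
block_in_use() {
    local block=$1 pt
    # spdk-gpt.py fails on a blank disk, as seen in the trace above.
    if /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py "$block"; then
        return 0
    fi
    # Fallback shown in the trace: an empty PTTYPE means no partition table.
    pt=$(blkid -s PTTYPE -o value "$block")
    [[ -n $pt ]] && return 0
    return 1
}
block_in_use /dev/nvme0n1 || echo "/dev/nvme0n1 is free for xnvme tests"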
00:16:27.505 [2024-12-06 06:42:39.985053] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69264 ] 00:16:27.505 [2024-12-06 06:42:40.143974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.505 [2024-12-06 06:42:40.243534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.440 06:42:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:28.440 06:42:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:28.440 06:42:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:16:28.440 06:42:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.440 06:42:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.440 xnvme_bdev 00:16:28.440 06:42:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.440 06:42:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:16:28.440 06:42:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:16:28.440 06:42:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:28.440 06:42:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.440 06:42:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.440 06:42:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.440 06:42:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:16:28.440 06:42:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:16:28.440 06:42:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:28.440 06:42:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:16:28.440 06:42:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.440 06:42:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.440 06:42:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.440 06:42:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:16:28.440 06:42:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:16:28.440 06:42:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:28.440 06:42:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.440 06:42:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.440 06:42:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:16:28.440 06:42:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.440 06:42:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:16:28.440 06:42:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:16:28.440 06:42:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:16:28.440 06:42:40 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:28.440 06:42:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.440 06:42:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.440 06:42:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.440 06:42:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:16:28.440 06:42:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:16:28.440 06:42:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.440 06:42:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.440 06:42:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.440 06:42:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69264 00:16:28.440 06:42:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69264 ']' 00:16:28.440 06:42:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69264 00:16:28.440 06:42:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:16:28.440 06:42:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:28.440 06:42:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69264 00:16:28.440 06:42:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:28.440 06:42:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:28.440 killing process with pid 69264 00:16:28.440 06:42:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69264' 00:16:28.440 06:42:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69264 00:16:28.440 06:42:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69264 00:16:29.809 00:16:29.809 real 0m2.342s 00:16:29.809 user 0m2.414s 00:16:29.809 sys 0m0.362s 00:16:29.809 06:42:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:29.809 ************************************ 00:16:29.809 END TEST xnvme_rpc 00:16:29.809 ************************************ 00:16:29.809 06:42:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:29.809 06:42:42 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:16:29.809 06:42:42 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:29.809 06:42:42 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:29.809 06:42:42 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:29.809 ************************************ 00:16:29.809 START TEST xnvme_bdevperf 00:16:29.809 ************************************ 00:16:29.809 06:42:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:16:29.809 06:42:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:16:29.809 06:42:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:16:29.809 06:42:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:29.809 06:42:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:16:29.809 06:42:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:16:29.809 06:42:42 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:29.809 06:42:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:29.809 { 00:16:29.809 "subsystems": [ 00:16:29.809 { 00:16:29.809 "subsystem": "bdev", 00:16:29.809 "config": [ 00:16:29.809 { 00:16:29.809 "params": { 00:16:29.809 "io_mechanism": "libaio", 00:16:29.809 "conserve_cpu": false, 00:16:29.809 "filename": "/dev/nvme0n1", 00:16:29.809 "name": "xnvme_bdev" 00:16:29.809 }, 00:16:29.809 "method": "bdev_xnvme_create" 00:16:29.809 }, 00:16:29.809 { 00:16:29.809 "method": "bdev_wait_for_examine" 00:16:29.809 } 00:16:29.809 ] 00:16:29.809 } 00:16:29.809 ] 00:16:29.809 } 00:16:29.809 [2024-12-06 06:42:42.337697] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:16:29.809 [2024-12-06 06:42:42.337803] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69335 ] 00:16:29.809 [2024-12-06 06:42:42.492977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.066 [2024-12-06 06:42:42.597570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.323 Running I/O for 5 seconds... 00:16:32.266 38416.00 IOPS, 150.06 MiB/s [2024-12-06T06:42:45.961Z] 37376.50 IOPS, 146.00 MiB/s [2024-12-06T06:42:46.892Z] 36784.67 IOPS, 143.69 MiB/s [2024-12-06T06:42:48.265Z] 36628.25 IOPS, 143.08 MiB/s [2024-12-06T06:42:48.265Z] 36539.80 IOPS, 142.73 MiB/s 00:16:35.524 Latency(us) 00:16:35.524 [2024-12-06T06:42:48.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:35.525 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:35.525 xnvme_bdev : 5.00 36518.40 142.65 0.00 0.00 1748.20 341.86 8519.68 00:16:35.525 [2024-12-06T06:42:48.266Z] =================================================================================================================== 00:16:35.525 [2024-12-06T06:42:48.266Z] Total : 36518.40 142.65 0.00 0.00 1748.20 341.86 8519.68 00:16:36.091 06:42:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:36.091 06:42:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:36.091 06:42:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:16:36.091 06:42:48 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:36.091 06:42:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:36.091 { 00:16:36.091 "subsystems": [ 00:16:36.091 { 00:16:36.091 "subsystem": "bdev", 00:16:36.091 "config": [ 00:16:36.091 { 00:16:36.091 "params": { 00:16:36.091 "io_mechanism": "libaio", 00:16:36.091 "conserve_cpu": false, 00:16:36.091 "filename": "/dev/nvme0n1", 00:16:36.091 "name": "xnvme_bdev" 00:16:36.091 }, 00:16:36.091 "method": "bdev_xnvme_create" 00:16:36.091 }, 00:16:36.091 { 00:16:36.091 "method": "bdev_wait_for_examine" 00:16:36.091 } 00:16:36.091 ] 00:16:36.091 } 00:16:36.091 ] 00:16:36.091 } 00:16:36.091 [2024-12-06 06:42:48.676712] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
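(Annotation, not captured log output.) Each bdevperf run in this trace feeds a generated JSON config to the tool over /dev/fd/62; the JSON printed above is that entire configuration. A standalone equivalent with the config written to a file, reusing the flags from the trace (-q 64 queue depth, -o 4096 byte I/Os, -t 5 seconds, -w workload, -T target bdev; flag meanings are as commonly documented for bdevperf):

cat > /tmp/xnvme_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "io_mechanism": "libaio",
            "conserve_cpu": false,
            "filename": "/dev/nvme0n1",
            "name": "xnvme_bdev"
          },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/xnvme_bdev.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096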
00:16:36.091 [2024-12-06 06:42:48.676823] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69405 ] 00:16:36.349 [2024-12-06 06:42:48.834559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.349 [2024-12-06 06:42:48.936045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.606 Running I/O for 5 seconds... 00:16:38.474 35445.00 IOPS, 138.46 MiB/s [2024-12-06T06:42:52.590Z] 35930.00 IOPS, 140.35 MiB/s [2024-12-06T06:42:53.525Z] 36258.00 IOPS, 141.63 MiB/s [2024-12-06T06:42:54.460Z] 36667.75 IOPS, 143.23 MiB/s [2024-12-06T06:42:54.460Z] 36873.60 IOPS, 144.04 MiB/s 00:16:41.719 Latency(us) 00:16:41.719 [2024-12-06T06:42:54.460Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:41.719 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:16:41.719 xnvme_bdev : 5.00 36844.72 143.92 0.00 0.00 1732.03 233.16 5469.74 00:16:41.719 [2024-12-06T06:42:54.460Z] =================================================================================================================== 00:16:41.719 [2024-12-06T06:42:54.460Z] Total : 36844.72 143.92 0.00 0.00 1732.03 233.16 5469.74 00:16:42.286 00:16:42.286 real 0m12.663s 00:16:42.286 user 0m4.486s 00:16:42.286 sys 0m5.531s 00:16:42.286 06:42:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:42.286 ************************************ 00:16:42.286 END TEST xnvme_bdevperf 00:16:42.286 ************************************ 00:16:42.286 06:42:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:42.286 06:42:54 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:16:42.286 06:42:54 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:42.286 06:42:54 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:42.286 06:42:54 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:42.286 ************************************ 00:16:42.286 START TEST xnvme_fio_plugin 00:16:42.286 ************************************ 00:16:42.286 06:42:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:16:42.286 06:42:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:16:42.286 06:42:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:16:42.286 06:42:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:42.286 06:42:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:42.286 06:42:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:42.286 06:42:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:42.286 06:42:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:16:42.286 06:42:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:42.286 06:42:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:42.286 06:42:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:42.286 06:42:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:42.286 06:42:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:42.286 06:42:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:42.286 06:42:54 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:42.286 06:42:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:42.286 06:42:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:42.286 06:42:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:42.286 06:42:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:42.286 06:42:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:42.286 06:42:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:42.286 06:42:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:42.286 06:42:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:42.286 06:42:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:42.286 { 00:16:42.286 "subsystems": [ 00:16:42.286 { 00:16:42.286 "subsystem": "bdev", 00:16:42.286 "config": [ 00:16:42.286 { 00:16:42.286 "params": { 00:16:42.286 "io_mechanism": "libaio", 00:16:42.286 "conserve_cpu": false, 00:16:42.286 "filename": "/dev/nvme0n1", 00:16:42.286 "name": "xnvme_bdev" 00:16:42.286 }, 00:16:42.286 "method": "bdev_xnvme_create" 00:16:42.286 }, 00:16:42.286 { 00:16:42.286 "method": "bdev_wait_for_examine" 00:16:42.286 } 00:16:42.286 ] 00:16:42.286 } 00:16:42.286 ] 00:16:42.286 } 00:16:42.542 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:42.543 fio-3.35 00:16:42.543 Starting 1 thread 00:16:49.087 00:16:49.087 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69524: Fri Dec 6 06:43:00 2024 00:16:49.087 read: IOPS=43.6k, BW=170MiB/s (179MB/s)(853MiB/5001msec) 00:16:49.087 slat (usec): min=3, max=1126, avg=19.55, stdev=26.24 00:16:49.087 clat (usec): min=49, max=5356, avg=867.94, stdev=532.85 00:16:49.087 lat (usec): min=115, max=5397, avg=887.48, stdev=536.60 00:16:49.087 clat percentiles (usec): 00:16:49.087 | 1.00th=[ 174], 5.00th=[ 253], 10.00th=[ 326], 20.00th=[ 445], 00:16:49.087 | 30.00th=[ 553], 40.00th=[ 652], 50.00th=[ 758], 60.00th=[ 873], 00:16:49.087 | 70.00th=[ 1004], 80.00th=[ 1188], 90.00th=[ 1549], 95.00th=[ 1926], 00:16:49.087 | 99.00th=[ 2769], 99.50th=[ 3064], 99.90th=[ 3621], 99.95th=[ 3818], 00:16:49.087 | 99.99th=[ 4293] 00:16:49.087 bw ( KiB/s): min=155984, 
max=188288, per=98.95%, avg=172749.33, stdev=10027.75, samples=9 00:16:49.087 iops : min=38996, max=47072, avg=43187.33, stdev=2506.94, samples=9 00:16:49.087 lat (usec) : 50=0.01%, 100=0.01%, 250=4.78%, 500=20.34%, 750=24.52% 00:16:49.087 lat (usec) : 1000=20.20% 00:16:49.087 lat (msec) : 2=25.78%, 4=4.35%, 10=0.03% 00:16:49.087 cpu : usr=28.12%, sys=51.70%, ctx=134, majf=0, minf=764 00:16:49.087 IO depths : 1=0.2%, 2=1.5%, 4=4.6%, 8=11.0%, 16=25.1%, 32=55.8%, >=64=1.8% 00:16:49.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:49.087 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:16:49.087 issued rwts: total=218270,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:49.087 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:49.087 00:16:49.087 Run status group 0 (all jobs): 00:16:49.087 READ: bw=170MiB/s (179MB/s), 170MiB/s-170MiB/s (179MB/s-179MB/s), io=853MiB (894MB), run=5001-5001msec 00:16:49.087 ----------------------------------------------------- 00:16:49.087 Suppressions used: 00:16:49.087 count bytes template 00:16:49.087 1 11 /usr/src/fio/parse.c 00:16:49.087 1 8 libtcmalloc_minimal.so 00:16:49.087 1 904 libcrypto.so 00:16:49.087 ----------------------------------------------------- 00:16:49.087 00:16:49.087 06:43:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:49.087 06:43:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:49.087 06:43:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:49.087 06:43:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:49.087 06:43:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:49.087 06:43:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:49.087 06:43:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:49.087 06:43:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:49.087 06:43:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:49.087 06:43:01 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:49.087 06:43:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:49.087 06:43:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:49.087 06:43:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:49.087 06:43:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:49.087 06:43:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:49.087 06:43:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:49.087 06:43:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # 
asan_lib=/usr/lib64/libasan.so.8 00:16:49.087 06:43:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:49.087 06:43:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:49.087 06:43:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:49.087 06:43:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:49.087 { 00:16:49.087 "subsystems": [ 00:16:49.087 { 00:16:49.087 "subsystem": "bdev", 00:16:49.087 "config": [ 00:16:49.087 { 00:16:49.087 "params": { 00:16:49.087 "io_mechanism": "libaio", 00:16:49.087 "conserve_cpu": false, 00:16:49.087 "filename": "/dev/nvme0n1", 00:16:49.087 "name": "xnvme_bdev" 00:16:49.087 }, 00:16:49.087 "method": "bdev_xnvme_create" 00:16:49.087 }, 00:16:49.087 { 00:16:49.087 "method": "bdev_wait_for_examine" 00:16:49.087 } 00:16:49.087 ] 00:16:49.087 } 00:16:49.087 ] 00:16:49.087 } 00:16:49.345 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:49.345 fio-3.35 00:16:49.345 Starting 1 thread 00:16:55.897 00:16:55.897 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69617: Fri Dec 6 06:43:07 2024 00:16:55.897 write: IOPS=43.2k, BW=169MiB/s (177MB/s)(845MiB/5001msec); 0 zone resets 00:16:55.897 slat (usec): min=3, max=711, avg=19.35, stdev=24.65 00:16:55.897 clat (usec): min=45, max=65907, avg=879.28, stdev=1396.72 00:16:55.897 lat (usec): min=83, max=65914, avg=898.63, stdev=1398.09 00:16:55.897 clat percentiles (usec): 00:16:55.897 | 1.00th=[ 169], 5.00th=[ 241], 10.00th=[ 310], 20.00th=[ 433], 00:16:55.897 | 30.00th=[ 537], 40.00th=[ 635], 50.00th=[ 742], 60.00th=[ 857], 00:16:55.897 | 70.00th=[ 979], 80.00th=[ 1139], 90.00th=[ 1418], 95.00th=[ 1926], 00:16:55.897 | 99.00th=[ 2933], 99.50th=[ 3228], 99.90th=[ 4293], 99.95th=[30802], 00:16:55.897 | 99.99th=[64750] 00:16:55.897 bw ( KiB/s): min=153816, max=185992, per=100.00%, avg=173513.44, stdev=10389.63, samples=9 00:16:55.897 iops : min=38454, max=46498, avg=43378.33, stdev=2597.44, samples=9 00:16:55.897 lat (usec) : 50=0.01%, 100=0.01%, 250=5.65%, 500=20.88%, 750=24.29% 00:16:55.897 lat (usec) : 1000=20.68% 00:16:55.897 lat (msec) : 2=23.95%, 4=4.41%, 10=0.04%, 20=0.01%, 50=0.06% 00:16:55.897 lat (msec) : 100=0.03% 00:16:55.897 cpu : usr=30.80%, sys=51.82%, ctx=87, majf=0, minf=765 00:16:55.897 IO depths : 1=0.2%, 2=1.5%, 4=4.8%, 8=11.5%, 16=25.2%, 32=55.0%, >=64=1.8% 00:16:55.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.897 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:16:55.897 issued rwts: total=0,216286,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.897 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:55.897 00:16:55.897 Run status group 0 (all jobs): 00:16:55.897 WRITE: bw=169MiB/s (177MB/s), 169MiB/s-169MiB/s (177MB/s-177MB/s), io=845MiB (886MB), run=5001-5001msec 00:16:55.897 ----------------------------------------------------- 00:16:55.897 Suppressions used: 00:16:55.897 count bytes template 00:16:55.897 1 11 /usr/src/fio/parse.c 00:16:55.897 1 8 libtcmalloc_minimal.so 00:16:55.898 1 904 libcrypto.so 00:16:55.898 
----------------------------------------------------- 00:16:55.898 00:16:55.898 00:16:55.898 real 0m13.539s 00:16:55.898 user 0m5.604s 00:16:55.898 sys 0m5.658s 00:16:55.898 06:43:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:55.898 ************************************ 00:16:55.898 END TEST xnvme_fio_plugin 00:16:55.898 ************************************ 00:16:55.898 06:43:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:55.898 06:43:08 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:16:55.898 06:43:08 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:16:55.898 06:43:08 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:16:55.898 06:43:08 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:16:55.898 06:43:08 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:55.898 06:43:08 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:55.898 06:43:08 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:55.898 ************************************ 00:16:55.898 START TEST xnvme_rpc 00:16:55.898 ************************************ 00:16:55.898 06:43:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:16:55.898 06:43:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:16:55.898 06:43:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:16:55.898 06:43:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:16:55.898 06:43:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:16:55.898 06:43:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69703 00:16:55.898 06:43:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69703 00:16:55.898 06:43:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69703 ']' 00:16:55.898 06:43:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.898 06:43:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:55.898 06:43:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.898 06:43:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:55.898 06:43:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.898 06:43:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:56.156 [2024-12-06 06:43:08.656370] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
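(Annotation, not captured log output.) This second xnvme_rpc round repeats the same checks with conserve_cpu enabled: the cc map above turns the "true" case into a -c flag on the create call. A hand-run equivalent against a live spdk_tgt, using only the RPCs and the jq expression that appear in this trace (rpc.py's default socket /var/tmp/spdk.sock is assumed):

# Create the xnvme bdev with conserve_cpu enabled (-c), then verify the
# stored params the same way rpc_xnvme does: dump the bdev config and
# pick the bdev_xnvme_create entry apart with jq.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
# Expected output: true
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_xnvme_delete xnvme_bdev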
00:16:56.156 [2024-12-06 06:43:08.656499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69703 ] 00:16:56.156 [2024-12-06 06:43:08.817677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.413 [2024-12-06 06:43:08.917146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.025 06:43:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:57.025 06:43:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:57.025 06:43:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:16:57.025 06:43:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.025 06:43:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.025 xnvme_bdev 00:16:57.025 06:43:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.025 06:43:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:16:57.025 06:43:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:57.025 06:43:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.025 06:43:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:16:57.025 06:43:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.025 06:43:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.025 06:43:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:16:57.025 06:43:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:16:57.025 06:43:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:57.025 06:43:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.025 06:43:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.025 06:43:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:16:57.025 06:43:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.025 06:43:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:16:57.025 06:43:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:16:57.026 06:43:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:16:57.026 06:43:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:57.026 06:43:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.026 06:43:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.026 06:43:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.026 06:43:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:16:57.026 06:43:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:16:57.026 06:43:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:16:57.026 06:43:09 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:57.026 06:43:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.026 06:43:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.026 06:43:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.026 06:43:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:16:57.026 06:43:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:16:57.026 06:43:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.026 06:43:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.026 06:43:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.026 06:43:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69703 00:16:57.026 06:43:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69703 ']' 00:16:57.026 06:43:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69703 00:16:57.026 06:43:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:16:57.026 06:43:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:57.026 06:43:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69703 00:16:57.026 06:43:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:57.026 06:43:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:57.026 killing process with pid 69703 00:16:57.026 06:43:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69703' 00:16:57.026 06:43:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69703 00:16:57.026 06:43:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69703 00:16:58.928 00:16:58.928 real 0m2.634s 00:16:58.928 user 0m2.673s 00:16:58.928 sys 0m0.358s 00:16:58.928 06:43:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:58.928 06:43:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:58.928 ************************************ 00:16:58.928 END TEST xnvme_rpc 00:16:58.928 ************************************ 00:16:58.928 06:43:11 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:16:58.928 06:43:11 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:58.928 06:43:11 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:58.928 06:43:11 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:58.928 ************************************ 00:16:58.928 START TEST xnvme_bdevperf 00:16:58.928 ************************************ 00:16:58.928 06:43:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:16:58.928 06:43:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:16:58.928 06:43:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:16:58.928 06:43:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:58.928 06:43:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:16:58.928 06:43:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:16:58.928 06:43:11 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:58.928 06:43:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:58.928 { 00:16:58.928 "subsystems": [ 00:16:58.928 { 00:16:58.928 "subsystem": "bdev", 00:16:58.928 "config": [ 00:16:58.928 { 00:16:58.928 "params": { 00:16:58.928 "io_mechanism": "libaio", 00:16:58.928 "conserve_cpu": true, 00:16:58.928 "filename": "/dev/nvme0n1", 00:16:58.928 "name": "xnvme_bdev" 00:16:58.928 }, 00:16:58.928 "method": "bdev_xnvme_create" 00:16:58.928 }, 00:16:58.928 { 00:16:58.928 "method": "bdev_wait_for_examine" 00:16:58.928 } 00:16:58.928 ] 00:16:58.928 } 00:16:58.928 ] 00:16:58.928 } 00:16:58.928 [2024-12-06 06:43:11.304995] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:16:58.928 [2024-12-06 06:43:11.305114] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69772 ] 00:16:58.928 [2024-12-06 06:43:11.465799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.928 [2024-12-06 06:43:11.567500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.185 Running I/O for 5 seconds... 00:17:01.178 36203.00 IOPS, 141.42 MiB/s [2024-12-06T06:43:14.858Z] 37514.00 IOPS, 146.54 MiB/s [2024-12-06T06:43:16.228Z] 37531.33 IOPS, 146.61 MiB/s [2024-12-06T06:43:17.161Z] 37375.25 IOPS, 146.00 MiB/s 00:17:04.420 Latency(us) 00:17:04.420 [2024-12-06T06:43:17.161Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:04.420 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:17:04.420 xnvme_bdev : 5.00 37848.26 147.84 0.00 0.00 1686.65 174.87 8620.50 00:17:04.420 [2024-12-06T06:43:17.161Z] =================================================================================================================== 00:17:04.420 [2024-12-06T06:43:17.161Z] Total : 37848.26 147.84 0.00 0.00 1686.65 174.87 8620.50 00:17:04.987 06:43:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:04.987 06:43:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:17:04.987 06:43:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:04.987 06:43:17 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:04.987 06:43:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:04.987 { 00:17:04.987 "subsystems": [ 00:17:04.987 { 00:17:04.987 "subsystem": "bdev", 00:17:04.987 "config": [ 00:17:04.987 { 00:17:04.987 "params": { 00:17:04.987 "io_mechanism": "libaio", 00:17:04.987 "conserve_cpu": true, 00:17:04.987 "filename": "/dev/nvme0n1", 00:17:04.987 "name": "xnvme_bdev" 00:17:04.987 }, 00:17:04.987 "method": "bdev_xnvme_create" 00:17:04.987 }, 00:17:04.987 { 00:17:04.987 "method": "bdev_wait_for_examine" 00:17:04.987 } 00:17:04.987 ] 00:17:04.987 } 00:17:04.987 ] 00:17:04.987 } 00:17:04.987 [2024-12-06 06:43:17.637270] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
00:17:04.987 [2024-12-06 06:43:17.637390] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69847 ] 00:17:05.245 [2024-12-06 06:43:17.798561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.245 [2024-12-06 06:43:17.897152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.503 Running I/O for 5 seconds... 00:17:07.812 35302.00 IOPS, 137.90 MiB/s [2024-12-06T06:43:21.484Z] 35458.50 IOPS, 138.51 MiB/s [2024-12-06T06:43:22.416Z] 35353.00 IOPS, 138.10 MiB/s [2024-12-06T06:43:23.350Z] 35489.00 IOPS, 138.63 MiB/s 00:17:10.609 Latency(us) 00:17:10.609 [2024-12-06T06:43:23.350Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:10.609 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:17:10.609 xnvme_bdev : 5.00 35228.44 137.61 0.00 0.00 1811.87 164.63 5192.47 00:17:10.609 [2024-12-06T06:43:23.350Z] =================================================================================================================== 00:17:10.609 [2024-12-06T06:43:23.350Z] Total : 35228.44 137.61 0.00 0.00 1811.87 164.63 5192.47 00:17:11.174 00:17:11.174 real 0m12.652s 00:17:11.174 user 0m4.568s 00:17:11.174 sys 0m5.404s 00:17:11.174 06:43:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:11.174 ************************************ 00:17:11.174 END TEST xnvme_bdevperf 00:17:11.174 ************************************ 00:17:11.174 06:43:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:11.432 06:43:23 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:17:11.432 06:43:23 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:11.432 06:43:23 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:11.432 06:43:23 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:11.432 ************************************ 00:17:11.432 START TEST xnvme_fio_plugin 00:17:11.432 ************************************ 00:17:11.432 06:43:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:17:11.432 06:43:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:17:11.432 06:43:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:17:11.432 06:43:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:11.432 06:43:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:11.432 06:43:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:11.432 06:43:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:11.432 06:43:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:11.432 06:43:23 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:11.432 06:43:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:11.432 06:43:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:11.432 06:43:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:11.432 06:43:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:11.432 06:43:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:11.432 06:43:23 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:11.432 06:43:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:11.432 06:43:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:11.432 06:43:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:11.432 06:43:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:11.432 06:43:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:11.432 06:43:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:11.432 06:43:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:11.432 06:43:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:11.432 06:43:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:11.432 { 00:17:11.432 "subsystems": [ 00:17:11.432 { 00:17:11.432 "subsystem": "bdev", 00:17:11.432 "config": [ 00:17:11.432 { 00:17:11.432 "params": { 00:17:11.432 "io_mechanism": "libaio", 00:17:11.432 "conserve_cpu": true, 00:17:11.432 "filename": "/dev/nvme0n1", 00:17:11.432 "name": "xnvme_bdev" 00:17:11.432 }, 00:17:11.432 "method": "bdev_xnvme_create" 00:17:11.432 }, 00:17:11.432 { 00:17:11.432 "method": "bdev_wait_for_examine" 00:17:11.432 } 00:17:11.432 ] 00:17:11.432 } 00:17:11.432 ] 00:17:11.432 } 00:17:11.433 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:11.433 fio-3.35 00:17:11.433 Starting 1 thread 00:17:17.987 00:17:17.987 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69966: Fri Dec 6 06:43:29 2024 00:17:17.987 read: IOPS=42.1k, BW=165MiB/s (173MB/s)(823MiB/5001msec) 00:17:17.987 slat (usec): min=3, max=1997, avg=20.26, stdev=30.90 00:17:17.987 clat (usec): min=80, max=5265, avg=903.02, stdev=558.14 00:17:17.987 lat (usec): min=136, max=5334, avg=923.29, stdev=561.56 00:17:17.987 clat percentiles (usec): 00:17:17.987 | 1.00th=[ 165], 5.00th=[ 243], 10.00th=[ 318], 20.00th=[ 445], 00:17:17.987 | 30.00th=[ 562], 40.00th=[ 676], 50.00th=[ 799], 60.00th=[ 914], 00:17:17.987 | 70.00th=[ 1057], 80.00th=[ 1270], 90.00th=[ 1598], 95.00th=[ 1991], 00:17:17.987 | 99.00th=[ 2835], 99.50th=[ 3130], 99.90th=[ 3884], 99.95th=[ 4146], 00:17:17.987 | 99.99th=[ 4555] 00:17:17.987 bw ( KiB/s): min=153088, max=179256, per=100.00%, avg=168690.67, stdev=8393.41, samples=9 
00:17:17.987 iops : min=38272, max=44814, avg=42172.67, stdev=2098.35, samples=9 00:17:17.987 lat (usec) : 100=0.01%, 250=5.42%, 500=19.15%, 750=21.66%, 1000=19.82% 00:17:17.987 lat (msec) : 2=29.09%, 4=4.77%, 10=0.08% 00:17:17.987 cpu : usr=27.38%, sys=53.40%, ctx=66, majf=0, minf=764 00:17:17.987 IO depths : 1=0.2%, 2=1.7%, 4=4.7%, 8=11.1%, 16=25.0%, 32=55.5%, >=64=1.8% 00:17:17.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:17.987 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:17:17.987 issued rwts: total=210756,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:17.987 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:17.987 00:17:17.987 Run status group 0 (all jobs): 00:17:17.987 READ: bw=165MiB/s (173MB/s), 165MiB/s-165MiB/s (173MB/s-173MB/s), io=823MiB (863MB), run=5001-5001msec 00:17:17.987 ----------------------------------------------------- 00:17:17.987 Suppressions used: 00:17:17.987 count bytes template 00:17:17.987 1 11 /usr/src/fio/parse.c 00:17:17.987 1 8 libtcmalloc_minimal.so 00:17:17.987 1 904 libcrypto.so 00:17:17.987 ----------------------------------------------------- 00:17:17.987 00:17:17.987 06:43:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:17.987 06:43:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:17.987 06:43:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:17.987 06:43:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:17.987 06:43:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:17.987 06:43:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:17.987 06:43:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:17.987 06:43:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:17.987 06:43:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:17.987 06:43:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:17.987 06:43:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:17.987 06:43:30 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:17.987 06:43:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:17.987 06:43:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:17.988 06:43:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:17.988 06:43:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:17.988 06:43:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:17.988 06:43:30 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:17.988 06:43:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:17.988 06:43:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:17.988 06:43:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:17.988 { 00:17:17.988 "subsystems": [ 00:17:17.988 { 00:17:17.988 "subsystem": "bdev", 00:17:17.988 "config": [ 00:17:17.988 { 00:17:17.988 "params": { 00:17:17.988 "io_mechanism": "libaio", 00:17:17.988 "conserve_cpu": true, 00:17:17.988 "filename": "/dev/nvme0n1", 00:17:17.988 "name": "xnvme_bdev" 00:17:17.988 }, 00:17:17.988 "method": "bdev_xnvme_create" 00:17:17.988 }, 00:17:17.988 { 00:17:17.988 "method": "bdev_wait_for_examine" 00:17:17.988 } 00:17:17.988 ] 00:17:17.988 } 00:17:17.988 ] 00:17:17.988 } 00:17:18.245 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:18.245 fio-3.35 00:17:18.245 Starting 1 thread 00:17:24.802 00:17:24.802 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70052: Fri Dec 6 06:43:36 2024 00:17:24.802 write: IOPS=42.2k, BW=165MiB/s (173MB/s)(825MiB/5001msec); 0 zone resets 00:17:24.802 slat (usec): min=3, max=752, avg=20.26, stdev=24.07 00:17:24.802 clat (usec): min=51, max=5240, avg=888.78, stdev=539.28 00:17:24.802 lat (usec): min=100, max=5347, avg=909.04, stdev=542.70 00:17:24.802 clat percentiles (usec): 00:17:24.802 | 1.00th=[ 169], 5.00th=[ 241], 10.00th=[ 318], 20.00th=[ 453], 00:17:24.802 | 30.00th=[ 570], 40.00th=[ 685], 50.00th=[ 799], 60.00th=[ 914], 00:17:24.802 | 70.00th=[ 1045], 80.00th=[ 1221], 90.00th=[ 1516], 95.00th=[ 1893], 00:17:24.802 | 99.00th=[ 2835], 99.50th=[ 3163], 99.90th=[ 3884], 99.95th=[ 4047], 00:17:24.802 | 99.99th=[ 4424] 00:17:24.802 bw ( KiB/s): min=161736, max=178944, per=100.00%, avg=170209.67, stdev=5622.57, samples=9 00:17:24.802 iops : min=40434, max=44736, avg=42552.33, stdev=1405.58, samples=9 00:17:24.802 lat (usec) : 100=0.01%, 250=5.56%, 500=18.37%, 750=21.80%, 1000=21.03% 00:17:24.802 lat (msec) : 2=29.02%, 4=4.15%, 10=0.07% 00:17:24.802 cpu : usr=27.54%, sys=53.02%, ctx=92, majf=0, minf=765 00:17:24.802 IO depths : 1=0.2%, 2=1.7%, 4=5.0%, 8=11.5%, 16=25.1%, 32=54.8%, >=64=1.8% 00:17:24.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:24.802 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:17:24.802 issued rwts: total=0,211156,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:24.802 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:24.802 00:17:24.802 Run status group 0 (all jobs): 00:17:24.802 WRITE: bw=165MiB/s (173MB/s), 165MiB/s-165MiB/s (173MB/s-173MB/s), io=825MiB (865MB), run=5001-5001msec 00:17:24.802 ----------------------------------------------------- 00:17:24.802 Suppressions used: 00:17:24.802 count bytes template 00:17:24.802 1 11 /usr/src/fio/parse.c 00:17:24.802 1 8 libtcmalloc_minimal.so 00:17:24.802 1 904 libcrypto.so 00:17:24.802 ----------------------------------------------------- 00:17:24.802 00:17:24.802 00:17:24.802 real 0m13.518s 00:17:24.802 user 0m5.358s 00:17:24.802 sys 0m5.818s 00:17:24.802 06:43:37 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:17:24.802 06:43:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:24.802 ************************************ 00:17:24.802 END TEST xnvme_fio_plugin 00:17:24.802 ************************************ 00:17:24.802 06:43:37 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:17:24.802 06:43:37 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:17:24.802 06:43:37 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:17:24.802 06:43:37 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:17:24.802 06:43:37 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:17:24.802 06:43:37 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:17:24.802 06:43:37 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:17:24.802 06:43:37 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:17:24.802 06:43:37 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:17:24.802 06:43:37 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:24.802 06:43:37 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:24.802 06:43:37 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:24.802 ************************************ 00:17:24.802 START TEST xnvme_rpc 00:17:24.802 ************************************ 00:17:24.802 06:43:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:17:24.802 06:43:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:17:24.802 06:43:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:17:24.802 06:43:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:17:24.802 06:43:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:17:24.802 06:43:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70137 00:17:24.802 06:43:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70137 00:17:24.802 06:43:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70137 ']' 00:17:24.802 06:43:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.802 06:43:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:24.802 06:43:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:24.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:24.802 06:43:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.802 06:43:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:24.802 06:43:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.060 [2024-12-06 06:43:37.561064] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
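[Annotation] The xnvme_rpc test starting here brings up spdk_tgt and exercises bdev_xnvme_create/bdev_xnvme_delete over the RPC socket, checking each reported parameter with the jq filters visible below. Outside the harness' rpc_cmd wrapper, the equivalent sequence with SPDK's stock client would look roughly as follows (scripts/rpc.py and the default /var/tmp/spdk.sock socket are standard SPDK; the exact invocation is a sketch, not lifted from this log):

# Create the bdev over io_uring, verify the recorded io_mechanism, tear down.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring
$RPC framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'  # expect: io_uring
$RPC bdev_xnvme_delete xnvme_bdev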
00:17:25.060 [2024-12-06 06:43:37.561186] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70137 ] 00:17:25.060 [2024-12-06 06:43:37.716846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.319 [2024-12-06 06:43:37.815186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.886 06:43:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:25.886 06:43:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:25.886 06:43:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:17:25.886 06:43:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.886 06:43:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.886 xnvme_bdev 00:17:25.886 06:43:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.886 06:43:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:17:25.886 06:43:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:17:25.886 06:43:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:25.886 06:43:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.886 06:43:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.886 06:43:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.886 06:43:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:17:25.886 06:43:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:17:25.886 06:43:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:25.886 06:43:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:17:25.886 06:43:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.886 06:43:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.886 06:43:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.886 06:43:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:17:25.886 06:43:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:17:25.886 06:43:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:25.886 06:43:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.886 06:43:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.886 06:43:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:17:25.886 06:43:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.886 06:43:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:17:25.886 06:43:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:17:25.886 06:43:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:25.886 06:43:38 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.886 06:43:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.886 06:43:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:17:25.886 06:43:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.886 06:43:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:17:25.886 06:43:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:17:25.886 06:43:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.886 06:43:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.886 06:43:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.886 06:43:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70137 00:17:25.886 06:43:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70137 ']' 00:17:25.886 06:43:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70137 00:17:25.886 06:43:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:17:25.886 06:43:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:25.886 06:43:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70137 00:17:25.886 06:43:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:25.886 06:43:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:25.886 killing process with pid 70137 00:17:25.886 06:43:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70137' 00:17:25.886 06:43:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70137 00:17:25.886 06:43:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70137 00:17:27.788 00:17:27.788 real 0m2.603s 00:17:27.788 user 0m2.720s 00:17:27.788 sys 0m0.347s 00:17:27.788 06:43:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:27.788 06:43:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.789 ************************************ 00:17:27.789 END TEST xnvme_rpc 00:17:27.789 ************************************ 00:17:27.789 06:43:40 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:17:27.789 06:43:40 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:27.789 06:43:40 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:27.789 06:43:40 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:27.789 ************************************ 00:17:27.789 START TEST xnvme_bdevperf 00:17:27.789 ************************************ 00:17:27.789 06:43:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:17:27.789 06:43:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:17:27.789 06:43:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:17:27.789 06:43:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:27.789 06:43:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:17:27.789 06:43:40 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:17:27.789 06:43:40 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:27.789 06:43:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:27.789 { 00:17:27.789 "subsystems": [ 00:17:27.789 { 00:17:27.789 "subsystem": "bdev", 00:17:27.789 "config": [ 00:17:27.789 { 00:17:27.789 "params": { 00:17:27.789 "io_mechanism": "io_uring", 00:17:27.789 "conserve_cpu": false, 00:17:27.789 "filename": "/dev/nvme0n1", 00:17:27.789 "name": "xnvme_bdev" 00:17:27.789 }, 00:17:27.789 "method": "bdev_xnvme_create" 00:17:27.789 }, 00:17:27.789 { 00:17:27.789 "method": "bdev_wait_for_examine" 00:17:27.789 } 00:17:27.789 ] 00:17:27.789 } 00:17:27.789 ] 00:17:27.789 } 00:17:27.789 [2024-12-06 06:43:40.213329] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:17:27.789 [2024-12-06 06:43:40.213508] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70207 ] 00:17:27.789 [2024-12-06 06:43:40.377954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.789 [2024-12-06 06:43:40.483659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.050 Running I/O for 5 seconds... 00:17:30.360 63380.00 IOPS, 247.58 MiB/s [2024-12-06T06:43:44.038Z] 63452.50 IOPS, 247.86 MiB/s [2024-12-06T06:43:45.027Z] 63098.67 IOPS, 246.48 MiB/s [2024-12-06T06:43:45.985Z] 63219.00 IOPS, 246.95 MiB/s 00:17:33.244 Latency(us) 00:17:33.244 [2024-12-06T06:43:45.985Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:33.244 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:17:33.244 xnvme_bdev : 5.00 63608.46 248.47 0.00 0.00 1002.12 371.79 9225.45 00:17:33.244 [2024-12-06T06:43:45.985Z] =================================================================================================================== 00:17:33.244 [2024-12-06T06:43:45.985Z] Total : 63608.46 248.47 0.00 0.00 1002.12 371.79 9225.45 00:17:33.811 06:43:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:33.811 06:43:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:17:33.811 06:43:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:33.811 06:43:46 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:33.811 06:43:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:33.811 { 00:17:33.811 "subsystems": [ 00:17:33.811 { 00:17:33.811 "subsystem": "bdev", 00:17:33.811 "config": [ 00:17:33.811 { 00:17:33.811 "params": { 00:17:33.811 "io_mechanism": "io_uring", 00:17:33.811 "conserve_cpu": false, 00:17:33.811 "filename": "/dev/nvme0n1", 00:17:33.811 "name": "xnvme_bdev" 00:17:33.811 }, 00:17:33.811 "method": "bdev_xnvme_create" 00:17:33.811 }, 00:17:33.811 { 00:17:33.811 "method": "bdev_wait_for_examine" 00:17:33.811 } 00:17:33.811 ] 00:17:33.811 } 00:17:33.811 ] 00:17:33.811 } 00:17:33.811 [2024-12-06 06:43:46.526824] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
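[Annotation] The randread total just above is internally consistent: every run in this log uses 4096-byte IOs, so MiB/s is simply IOPS x 4096 / 2^20, and queue depth 64 combined with the reported mean latency predicts nearly the same IOPS via Little's law. A quick cross-check:

# Bandwidth from the IOPS column (matches the reported 248.47 MiB/s):
awk 'BEGIN { printf "%.2f MiB/s\n", 63608.46 * 4096 / 1048576 }'
# Little's law at iodepth 64 with the 1002.12 us mean latency: ~63.9k IOPS,
# within half a percent of the measured 63608.46:
awk 'BEGIN { printf "%.0f IOPS\n", 64 / 1002.12e-6 }'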
00:17:33.811 [2024-12-06 06:43:46.526935] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70282 ] 00:17:34.069 [2024-12-06 06:43:46.685164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.069 [2024-12-06 06:43:46.782008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.325 Running I/O for 5 seconds... 00:17:36.643 59616.00 IOPS, 232.88 MiB/s [2024-12-06T06:43:50.316Z] 59952.00 IOPS, 234.19 MiB/s [2024-12-06T06:43:51.247Z] 60106.67 IOPS, 234.79 MiB/s [2024-12-06T06:43:52.180Z] 59672.00 IOPS, 233.09 MiB/s [2024-12-06T06:43:52.180Z] 59654.40 IOPS, 233.03 MiB/s 00:17:39.439 Latency(us) 00:17:39.439 [2024-12-06T06:43:52.180Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.439 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:17:39.439 xnvme_bdev : 5.00 59608.56 232.85 0.00 0.00 1069.10 762.49 3327.21 00:17:39.439 [2024-12-06T06:43:52.180Z] =================================================================================================================== 00:17:39.439 [2024-12-06T06:43:52.180Z] Total : 59608.56 232.85 0.00 0.00 1069.10 762.49 3327.21 00:17:40.381 00:17:40.381 real 0m12.625s 00:17:40.381 user 0m6.514s 00:17:40.381 sys 0m5.899s 00:17:40.381 06:43:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:40.381 06:43:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:40.381 ************************************ 00:17:40.381 END TEST xnvme_bdevperf 00:17:40.381 ************************************ 00:17:40.381 06:43:52 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:17:40.382 06:43:52 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:40.382 06:43:52 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:40.382 06:43:52 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:40.382 ************************************ 00:17:40.382 START TEST xnvme_fio_plugin 00:17:40.382 ************************************ 00:17:40.382 06:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:17:40.382 06:43:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:17:40.382 06:43:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:17:40.382 06:43:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:40.382 06:43:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:40.382 06:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:40.382 06:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:40.382 06:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:17:40.382 06:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:40.382 06:43:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:40.382 06:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:40.382 06:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:40.382 06:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:40.382 06:43:52 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:40.382 06:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:40.382 06:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:40.382 06:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:40.382 06:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:40.382 06:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:40.382 06:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:40.382 06:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:40.382 06:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:40.382 06:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:40.382 06:43:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:40.382 { 00:17:40.382 "subsystems": [ 00:17:40.382 { 00:17:40.382 "subsystem": "bdev", 00:17:40.382 "config": [ 00:17:40.382 { 00:17:40.382 "params": { 00:17:40.382 "io_mechanism": "io_uring", 00:17:40.382 "conserve_cpu": false, 00:17:40.382 "filename": "/dev/nvme0n1", 00:17:40.382 "name": "xnvme_bdev" 00:17:40.382 }, 00:17:40.382 "method": "bdev_xnvme_create" 00:17:40.382 }, 00:17:40.382 { 00:17:40.382 "method": "bdev_wait_for_examine" 00:17:40.382 } 00:17:40.382 ] 00:17:40.382 } 00:17:40.382 ] 00:17:40.382 } 00:17:40.382 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:40.382 fio-3.35 00:17:40.382 Starting 1 thread 00:17:46.970 00:17:46.970 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70396: Fri Dec 6 06:43:58 2024 00:17:46.970 read: IOPS=63.3k, BW=247MiB/s (259MB/s)(1236MiB/5002msec) 00:17:46.970 slat (usec): min=2, max=429, avg= 3.69, stdev= 1.63 00:17:46.970 clat (usec): min=142, max=8890, avg=868.79, stdev=182.67 00:17:46.970 lat (usec): min=146, max=8893, avg=872.48, stdev=183.11 00:17:46.970 clat percentiles (usec): 00:17:46.970 | 1.00th=[ 644], 5.00th=[ 676], 10.00th=[ 693], 20.00th=[ 734], 00:17:46.970 | 30.00th=[ 766], 40.00th=[ 799], 50.00th=[ 832], 60.00th=[ 865], 00:17:46.970 | 70.00th=[ 898], 80.00th=[ 979], 90.00th=[ 1106], 95.00th=[ 1205], 00:17:46.970 | 99.00th=[ 1467], 99.50th=[ 1598], 99.90th=[ 1975], 99.95th=[ 2245], 00:17:46.970 | 99.99th=[ 2835] 00:17:46.970 bw ( KiB/s): min=235520, 
max=273408, per=100.00%, avg=253708.44, stdev=10840.03, samples=9 00:17:46.970 iops : min=58880, max=68352, avg=63427.11, stdev=2710.01, samples=9 00:17:46.970 lat (usec) : 250=0.01%, 500=0.05%, 750=25.38%, 1000=56.48% 00:17:46.970 lat (msec) : 2=18.00%, 4=0.09%, 10=0.01% 00:17:46.970 cpu : usr=42.81%, sys=56.39%, ctx=15, majf=0, minf=762 00:17:46.970 IO depths : 1=1.5%, 2=3.0%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.2%, >=64=1.6% 00:17:46.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:46.970 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:17:46.970 issued rwts: total=316403,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:46.970 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:46.970 00:17:46.970 Run status group 0 (all jobs): 00:17:46.970 READ: bw=247MiB/s (259MB/s), 247MiB/s-247MiB/s (259MB/s-259MB/s), io=1236MiB (1296MB), run=5002-5002msec 00:17:46.970 ----------------------------------------------------- 00:17:46.970 Suppressions used: 00:17:46.970 count bytes template 00:17:46.970 1 11 /usr/src/fio/parse.c 00:17:46.970 1 8 libtcmalloc_minimal.so 00:17:46.970 1 904 libcrypto.so 00:17:46.970 ----------------------------------------------------- 00:17:46.970 00:17:46.970 06:43:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:46.970 06:43:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:46.970 06:43:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:46.970 06:43:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:46.970 06:43:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:46.970 06:43:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:46.970 06:43:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:46.970 06:43:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:46.970 06:43:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:46.971 06:43:59 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:46.971 06:43:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:46.971 06:43:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:46.971 06:43:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:46.971 06:43:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:46.971 06:43:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:46.971 06:43:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:46.971 06:43:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:46.971 
06:43:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:46.971 06:43:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:46.971 06:43:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:46.971 06:43:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:46.971 { 00:17:46.971 "subsystems": [ 00:17:46.971 { 00:17:46.971 "subsystem": "bdev", 00:17:46.971 "config": [ 00:17:46.971 { 00:17:46.971 "params": { 00:17:46.971 "io_mechanism": "io_uring", 00:17:46.971 "conserve_cpu": false, 00:17:46.971 "filename": "/dev/nvme0n1", 00:17:46.971 "name": "xnvme_bdev" 00:17:46.971 }, 00:17:46.971 "method": "bdev_xnvme_create" 00:17:46.971 }, 00:17:46.971 { 00:17:46.971 "method": "bdev_wait_for_examine" 00:17:46.971 } 00:17:46.971 ] 00:17:46.971 } 00:17:46.971 ] 00:17:46.971 } 00:17:46.971 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:46.971 fio-3.35 00:17:46.971 Starting 1 thread 00:17:53.519 00:17:53.519 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70489: Fri Dec 6 06:44:05 2024 00:17:53.519 write: IOPS=62.5k, BW=244MiB/s (256MB/s)(1220MiB/5001msec); 0 zone resets 00:17:53.519 slat (usec): min=2, max=127, avg= 3.78, stdev= 1.56 00:17:53.519 clat (usec): min=252, max=2906, avg=878.41, stdev=180.15 00:17:53.519 lat (usec): min=256, max=2946, avg=882.19, stdev=180.61 00:17:53.519 clat percentiles (usec): 00:17:53.519 | 1.00th=[ 652], 5.00th=[ 676], 10.00th=[ 701], 20.00th=[ 734], 00:17:53.519 | 30.00th=[ 775], 40.00th=[ 807], 50.00th=[ 840], 60.00th=[ 873], 00:17:53.519 | 70.00th=[ 914], 80.00th=[ 996], 90.00th=[ 1106], 95.00th=[ 1221], 00:17:53.519 | 99.00th=[ 1500], 99.50th=[ 1598], 99.90th=[ 1926], 99.95th=[ 2114], 00:17:53.519 | 99.99th=[ 2606] 00:17:53.519 bw ( KiB/s): min=235016, max=264192, per=100.00%, avg=250825.33, stdev=10626.06, samples=9 00:17:53.519 iops : min=58754, max=66048, avg=62706.33, stdev=2656.51, samples=9 00:17:53.519 lat (usec) : 500=0.01%, 750=23.78%, 1000=56.63% 00:17:53.519 lat (msec) : 2=19.51%, 4=0.07% 00:17:53.519 cpu : usr=43.68%, sys=55.38%, ctx=28, majf=0, minf=763 00:17:53.519 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:17:53.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:53.519 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.0%, 64=1.5%, >=64=0.0% 00:17:53.519 issued rwts: total=0,312317,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:53.519 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:53.519 00:17:53.519 Run status group 0 (all jobs): 00:17:53.519 WRITE: bw=244MiB/s (256MB/s), 244MiB/s-244MiB/s (256MB/s-256MB/s), io=1220MiB (1279MB), run=5001-5001msec 00:17:53.519 ----------------------------------------------------- 00:17:53.519 Suppressions used: 00:17:53.519 count bytes template 00:17:53.519 1 11 /usr/src/fio/parse.c 00:17:53.519 1 8 libtcmalloc_minimal.so 00:17:53.519 1 904 libcrypto.so 00:17:53.519 ----------------------------------------------------- 00:17:53.519 00:17:53.519 00:17:53.519 real 0m13.445s 00:17:53.519 user 0m6.946s 00:17:53.519 sys 0m6.077s 00:17:53.519 06:44:06 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:53.519 06:44:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:53.519 ************************************ 00:17:53.519 END TEST xnvme_fio_plugin 00:17:53.519 ************************************ 00:17:53.777 06:44:06 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:17:53.777 06:44:06 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:17:53.777 06:44:06 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:17:53.777 06:44:06 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:17:53.777 06:44:06 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:53.777 06:44:06 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:53.777 06:44:06 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:53.777 ************************************ 00:17:53.777 START TEST xnvme_rpc 00:17:53.777 ************************************ 00:17:53.777 06:44:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:17:53.777 06:44:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:17:53.777 06:44:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:17:53.777 06:44:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:17:53.777 06:44:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:17:53.777 06:44:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70574 00:17:53.777 06:44:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70574 00:17:53.777 06:44:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:53.778 06:44:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70574 ']' 00:17:53.778 06:44:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.778 06:44:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:53.778 06:44:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:53.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:53.778 06:44:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:53.778 06:44:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:53.778 [2024-12-06 06:44:06.353430] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
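[Annotation] This second xnvme_rpc pass repeats the create/inspect/delete cycle with conserve_cpu enabled: the harness maps cc["true"] to -c, so the only difference on the wire is that one flag, and the conserve_cpu probe below is expected to report true instead of false. A hedged equivalent with the stock client, under the same assumptions as the earlier sketch:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c
$RPC framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'  # expect: true
$RPC bdev_xnvme_delete xnvme_bdev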
00:17:53.778 [2024-12-06 06:44:06.353564] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70574 ] 00:17:53.778 [2024-12-06 06:44:06.514073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.034 [2024-12-06 06:44:06.615177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.598 06:44:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:54.598 06:44:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:54.598 06:44:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:17:54.598 06:44:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.598 06:44:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.598 xnvme_bdev 00:17:54.598 06:44:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.598 06:44:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:17:54.598 06:44:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:54.598 06:44:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.598 06:44:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.598 06:44:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:17:54.598 06:44:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.598 06:44:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:17:54.598 06:44:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:17:54.598 06:44:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:54.598 06:44:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.598 06:44:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.598 06:44:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:17:54.598 06:44:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.598 06:44:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:17:54.598 06:44:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:17:54.598 06:44:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:54.598 06:44:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:17:54.598 06:44:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.598 06:44:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.598 06:44:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.598 06:44:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:17:54.598 06:44:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:17:54.598 06:44:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:17:54.598 06:44:07 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:54.598 06:44:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.598 06:44:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.598 06:44:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.856 06:44:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:17:54.856 06:44:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:17:54.856 06:44:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.856 06:44:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.856 06:44:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.856 06:44:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70574 00:17:54.856 06:44:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70574 ']' 00:17:54.856 06:44:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70574 00:17:54.856 06:44:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:17:54.856 06:44:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:54.856 06:44:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70574 00:17:54.856 06:44:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:54.856 06:44:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:54.856 killing process with pid 70574 00:17:54.856 06:44:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70574' 00:17:54.856 06:44:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70574 00:17:54.856 06:44:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70574 00:17:56.230 00:17:56.230 real 0m2.580s 00:17:56.230 user 0m2.678s 00:17:56.230 sys 0m0.337s 00:17:56.230 06:44:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:56.230 06:44:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.230 ************************************ 00:17:56.230 END TEST xnvme_rpc 00:17:56.230 ************************************ 00:17:56.230 06:44:08 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:17:56.230 06:44:08 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:56.230 06:44:08 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:56.230 06:44:08 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:56.230 ************************************ 00:17:56.230 START TEST xnvme_bdevperf 00:17:56.230 ************************************ 00:17:56.230 06:44:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:17:56.230 06:44:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:17:56.230 06:44:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:17:56.230 06:44:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:56.230 06:44:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:17:56.230 06:44:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:17:56.230 06:44:08 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:56.230 06:44:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:56.230 { 00:17:56.230 "subsystems": [ 00:17:56.230 { 00:17:56.230 "subsystem": "bdev", 00:17:56.230 "config": [ 00:17:56.230 { 00:17:56.230 "params": { 00:17:56.230 "io_mechanism": "io_uring", 00:17:56.230 "conserve_cpu": true, 00:17:56.230 "filename": "/dev/nvme0n1", 00:17:56.230 "name": "xnvme_bdev" 00:17:56.230 }, 00:17:56.230 "method": "bdev_xnvme_create" 00:17:56.230 }, 00:17:56.230 { 00:17:56.230 "method": "bdev_wait_for_examine" 00:17:56.230 } 00:17:56.230 ] 00:17:56.230 } 00:17:56.230 ] 00:17:56.230 } 00:17:56.231 [2024-12-06 06:44:08.960701] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:17:56.231 [2024-12-06 06:44:08.960821] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70639 ] 00:17:56.489 [2024-12-06 06:44:09.119355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.489 [2024-12-06 06:44:09.217249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.747 Running I/O for 5 seconds... 00:17:59.053 58714.00 IOPS, 229.35 MiB/s [2024-12-06T06:44:12.729Z] 61349.00 IOPS, 239.64 MiB/s [2024-12-06T06:44:13.663Z] 61312.00 IOPS, 239.50 MiB/s [2024-12-06T06:44:14.640Z] 62012.25 IOPS, 242.24 MiB/s 00:18:01.899 Latency(us) 00:18:01.899 [2024-12-06T06:44:14.640Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:01.899 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:18:01.899 xnvme_bdev : 5.00 62723.17 245.01 0.00 0.00 1016.29 406.45 9628.75 00:18:01.899 [2024-12-06T06:44:14.640Z] =================================================================================================================== 00:18:01.899 [2024-12-06T06:44:14.640Z] Total : 62723.17 245.01 0.00 0.00 1016.29 406.45 9628.75 00:18:02.464 06:44:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:02.464 06:44:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:18:02.464 06:44:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:02.464 06:44:15 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:02.464 06:44:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:02.464 { 00:18:02.464 "subsystems": [ 00:18:02.464 { 00:18:02.464 "subsystem": "bdev", 00:18:02.464 "config": [ 00:18:02.464 { 00:18:02.464 "params": { 00:18:02.464 "io_mechanism": "io_uring", 00:18:02.464 "conserve_cpu": true, 00:18:02.464 "filename": "/dev/nvme0n1", 00:18:02.464 "name": "xnvme_bdev" 00:18:02.464 }, 00:18:02.464 "method": "bdev_xnvme_create" 00:18:02.464 }, 00:18:02.464 { 00:18:02.464 "method": "bdev_wait_for_examine" 00:18:02.464 } 00:18:02.464 ] 00:18:02.464 } 00:18:02.464 ] 00:18:02.464 } 00:18:02.464 [2024-12-06 06:44:15.083121] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
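[Annotation] Against the conserve_cpu=false pass earlier (63608.46 IOPS), enabling conserve_cpu gives up roughly 1.4% of randread throughput here (62723.17 IOPS). The fio_plugin tests that bracket these bdevperf runs push the same style of bdev config through SPDK's fio plugin instead; a by-hand sketch, reusing a /tmp/xnvme.json written as in the first note but with io_mechanism set to io_uring for this phase (the fio binary and plugin paths are taken verbatim from the log; the libasan preload the harness adds matters only on sanitizer builds):

LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme.json \
    --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
    --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev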
00:18:02.464 [2024-12-06 06:44:15.083239] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70714 ] 00:18:02.723 [2024-12-06 06:44:15.242973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.723 [2024-12-06 06:44:15.341221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.982 Running I/O for 5 seconds... 00:18:04.848 62144.00 IOPS, 242.75 MiB/s [2024-12-06T06:44:18.961Z] 60992.00 IOPS, 238.25 MiB/s [2024-12-06T06:44:19.892Z] 60778.67 IOPS, 237.42 MiB/s [2024-12-06T06:44:20.825Z] 60832.00 IOPS, 237.62 MiB/s 00:18:08.084 Latency(us) 00:18:08.084 [2024-12-06T06:44:20.825Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:08.084 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:18:08.084 xnvme_bdev : 5.00 60724.98 237.21 0.00 0.00 1049.60 608.10 3402.83 00:18:08.084 [2024-12-06T06:44:20.825Z] =================================================================================================================== 00:18:08.084 [2024-12-06T06:44:20.825Z] Total : 60724.98 237.21 0.00 0.00 1049.60 608.10 3402.83 00:18:08.650 ************************************ 00:18:08.650 END TEST xnvme_bdevperf 00:18:08.650 ************************************ 00:18:08.650 00:18:08.650 real 0m12.396s 00:18:08.650 user 0m6.057s 00:18:08.650 sys 0m5.873s 00:18:08.650 06:44:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:08.650 06:44:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:08.650 06:44:21 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:18:08.650 06:44:21 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:08.650 06:44:21 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:08.650 06:44:21 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:08.650 ************************************ 00:18:08.650 START TEST xnvme_fio_plugin 00:18:08.650 ************************************ 00:18:08.650 06:44:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:18:08.650 06:44:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:18:08.650 06:44:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:18:08.650 06:44:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:08.650 06:44:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:08.650 06:44:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:08.650 06:44:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:08.650 06:44:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:08.650 06:44:21 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:08.650 06:44:21 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:08.650 06:44:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:08.650 06:44:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:08.650 06:44:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:08.650 06:44:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:08.650 06:44:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:08.650 06:44:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:08.650 06:44:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:08.650 06:44:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:08.650 06:44:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:08.650 06:44:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:08.650 06:44:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:08.650 06:44:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:08.650 06:44:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:08.650 06:44:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:08.650 { 00:18:08.650 "subsystems": [ 00:18:08.650 { 00:18:08.650 "subsystem": "bdev", 00:18:08.650 "config": [ 00:18:08.650 { 00:18:08.650 "params": { 00:18:08.650 "io_mechanism": "io_uring", 00:18:08.650 "conserve_cpu": true, 00:18:08.650 "filename": "/dev/nvme0n1", 00:18:08.650 "name": "xnvme_bdev" 00:18:08.650 }, 00:18:08.650 "method": "bdev_xnvme_create" 00:18:08.650 }, 00:18:08.650 { 00:18:08.650 "method": "bdev_wait_for_examine" 00:18:08.650 } 00:18:08.650 ] 00:18:08.650 } 00:18:08.650 ] 00:18:08.650 } 00:18:08.907 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:08.907 fio-3.35 00:18:08.907 Starting 1 thread 00:18:15.511 00:18:15.511 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70832: Fri Dec 6 06:44:27 2024 00:18:15.511 read: IOPS=53.2k, BW=208MiB/s (218MB/s)(1040MiB/5001msec) 00:18:15.511 slat (usec): min=2, max=195, avg= 4.01, stdev= 1.94 00:18:15.511 clat (usec): min=595, max=4310, avg=1043.06, stdev=311.11 00:18:15.511 lat (usec): min=598, max=4314, avg=1047.07, stdev=311.77 00:18:15.511 clat percentiles (usec): 00:18:15.511 | 1.00th=[ 685], 5.00th=[ 725], 10.00th=[ 758], 20.00th=[ 807], 00:18:15.511 | 30.00th=[ 848], 40.00th=[ 898], 50.00th=[ 947], 60.00th=[ 1012], 00:18:15.511 | 70.00th=[ 1106], 80.00th=[ 1237], 90.00th=[ 1500], 95.00th=[ 1696], 00:18:15.511 | 99.00th=[ 2057], 99.50th=[ 2212], 99.90th=[ 2573], 99.95th=[ 2900], 00:18:15.511 | 99.99th=[ 4047] 00:18:15.511 bw ( KiB/s): min=149504, max=252416, per=98.98%, avg=210828.44, 
stdev=33187.49, samples=9 00:18:15.511 iops : min=37376, max=63104, avg=52707.11, stdev=8296.87, samples=9 00:18:15.511 lat (usec) : 750=9.21%, 1000=48.70% 00:18:15.511 lat (msec) : 2=40.75%, 4=1.32%, 10=0.01% 00:18:15.511 cpu : usr=45.44%, sys=50.90%, ctx=15, majf=0, minf=762 00:18:15.511 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:18:15.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:15.511 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:18:15.511 issued rwts: total=266302,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:15.511 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:15.511 00:18:15.511 Run status group 0 (all jobs): 00:18:15.511 READ: bw=208MiB/s (218MB/s), 208MiB/s-208MiB/s (218MB/s-218MB/s), io=1040MiB (1091MB), run=5001-5001msec 00:18:15.511 ----------------------------------------------------- 00:18:15.511 Suppressions used: 00:18:15.511 count bytes template 00:18:15.511 1 11 /usr/src/fio/parse.c 00:18:15.511 1 8 libtcmalloc_minimal.so 00:18:15.511 1 904 libcrypto.so 00:18:15.511 ----------------------------------------------------- 00:18:15.511 00:18:15.511 06:44:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:15.512 06:44:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:15.512 06:44:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:15.512 06:44:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:15.512 06:44:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:15.512 06:44:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:15.512 06:44:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:15.512 06:44:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:15.512 06:44:28 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:15.512 06:44:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:15.512 06:44:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:15.512 06:44:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:15.512 06:44:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:15.512 06:44:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:15.512 06:44:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:15.512 06:44:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:15.512 06:44:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:15.512 06:44:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 
-- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:15.512 06:44:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:15.512 06:44:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:15.512 06:44:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:15.512 { 00:18:15.512 "subsystems": [ 00:18:15.512 { 00:18:15.512 "subsystem": "bdev", 00:18:15.512 "config": [ 00:18:15.512 { 00:18:15.512 "params": { 00:18:15.512 "io_mechanism": "io_uring", 00:18:15.512 "conserve_cpu": true, 00:18:15.512 "filename": "/dev/nvme0n1", 00:18:15.512 "name": "xnvme_bdev" 00:18:15.512 }, 00:18:15.512 "method": "bdev_xnvme_create" 00:18:15.512 }, 00:18:15.512 { 00:18:15.512 "method": "bdev_wait_for_examine" 00:18:15.512 } 00:18:15.512 ] 00:18:15.512 } 00:18:15.512 ] 00:18:15.512 } 00:18:15.512 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:15.512 fio-3.35 00:18:15.512 Starting 1 thread 00:18:22.125 00:18:22.125 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70925: Fri Dec 6 06:44:33 2024 00:18:22.125 write: IOPS=63.4k, BW=248MiB/s (260MB/s)(1239MiB/5001msec); 0 zone resets 00:18:22.125 slat (nsec): min=2900, max=93031, avg=3658.89, stdev=1167.20 00:18:22.125 clat (usec): min=328, max=95293, avg=869.02, stdev=1330.23 00:18:22.125 lat (usec): min=331, max=95296, avg=872.68, stdev=1330.26 00:18:22.125 clat percentiles (usec): 00:18:22.125 | 1.00th=[ 652], 5.00th=[ 685], 10.00th=[ 701], 20.00th=[ 734], 00:18:22.125 | 30.00th=[ 766], 40.00th=[ 791], 50.00th=[ 824], 60.00th=[ 857], 00:18:22.125 | 70.00th=[ 881], 80.00th=[ 922], 90.00th=[ 1037], 95.00th=[ 1139], 00:18:22.125 | 99.00th=[ 1401], 99.50th=[ 1500], 99.90th=[ 1795], 99.95th=[ 2057], 00:18:22.125 | 99.99th=[93848] 00:18:22.125 bw ( KiB/s): min=217480, max=278784, per=100.00%, avg=254479.11, stdev=17487.86, samples=9 00:18:22.125 iops : min=54370, max=69696, avg=63619.78, stdev=4371.97, samples=9 00:18:22.125 lat (usec) : 500=0.01%, 750=24.96%, 1000=62.97% 00:18:22.125 lat (msec) : 2=12.01%, 4=0.02%, 10=0.02%, 100=0.02% 00:18:22.125 cpu : usr=41.32%, sys=55.78%, ctx=10, majf=0, minf=763 00:18:22.125 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:18:22.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:22.126 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:18:22.126 issued rwts: total=0,317265,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:22.126 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:22.126 00:18:22.126 Run status group 0 (all jobs): 00:18:22.126 WRITE: bw=248MiB/s (260MB/s), 248MiB/s-248MiB/s (260MB/s-260MB/s), io=1239MiB (1300MB), run=5001-5001msec 00:18:22.126 ----------------------------------------------------- 00:18:22.126 Suppressions used: 00:18:22.126 count bytes template 00:18:22.126 1 11 /usr/src/fio/parse.c 00:18:22.126 1 8 libtcmalloc_minimal.so 00:18:22.126 1 904 libcrypto.so 00:18:22.126 ----------------------------------------------------- 00:18:22.126 00:18:22.126 00:18:22.126 real 0m13.253s 00:18:22.126 user 0m6.797s 00:18:22.126 sys 0m5.802s 00:18:22.126 06:44:34 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:18:22.126 ************************************ 00:18:22.126 END TEST xnvme_fio_plugin 00:18:22.126 ************************************ 00:18:22.126 06:44:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:22.126 06:44:34 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:18:22.126 06:44:34 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:18:22.126 06:44:34 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:18:22.126 06:44:34 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:18:22.126 06:44:34 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:18:22.126 06:44:34 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:18:22.126 06:44:34 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:18:22.126 06:44:34 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:18:22.126 06:44:34 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:18:22.126 06:44:34 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:22.126 06:44:34 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:22.126 06:44:34 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:22.126 ************************************ 00:18:22.126 START TEST xnvme_rpc 00:18:22.126 ************************************ 00:18:22.126 06:44:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:18:22.126 06:44:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:18:22.126 06:44:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:18:22.126 06:44:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:18:22.126 06:44:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:18:22.126 06:44:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71000 00:18:22.126 06:44:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71000 00:18:22.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:22.126 06:44:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71000 ']' 00:18:22.126 06:44:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.126 06:44:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:22.126 06:44:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.126 06:44:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:22.126 06:44:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:22.126 06:44:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.126 [2024-12-06 06:44:34.704285] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
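The xnvme_rpc pass starting here reduces to a short RPC round-trip against spdk_tgt. A condensed sketch of that flow, assuming the SPDK tree path taken from this log and the default RPC socket at /var/tmp/spdk.sock; the socket poll is a simplified stand-in for the test's waitforlisten helper, and the create/probe/delete calls mirror the trace below:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk                # build-tree path as it appears in this log
    "$SPDK_DIR/build/bin/spdk_tgt" &                     # the trace records this instance as pid 71000
    tgt_pid=$!
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done  # stand-in for waitforlisten
    # create the xnvme bdev without -c, i.e. conserve_cpu=false for this first pass
    "$SPDK_DIR/scripts/rpc.py" bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd
    # read the registered config back, as the rpc_xnvme helper does with jq
    "$SPDK_DIR/scripts/rpc.py" framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'   # expect /dev/ng0n1
    "$SPDK_DIR/scripts/rpc.py" bdev_xnvme_delete xnvme_bdev
    kill "$tgt_pid" && wait "$tgt_pid" 2>/dev/null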
00:18:22.126 [2024-12-06 06:44:34.704404] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71000 ] 00:18:22.384 [2024-12-06 06:44:34.865820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.384 [2024-12-06 06:44:34.964793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.949 06:44:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:22.949 06:44:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:22.949 06:44:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:18:22.949 06:44:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.949 06:44:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.949 xnvme_bdev 00:18:22.949 06:44:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.949 06:44:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:18:22.949 06:44:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:22.949 06:44:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:18:22.949 06:44:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.949 06:44:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.949 06:44:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.949 06:44:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:18:22.949 06:44:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:18:22.949 06:44:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:22.949 06:44:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:18:22.949 06:44:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.949 06:44:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.949 06:44:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.949 06:44:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:18:22.949 06:44:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:18:22.949 06:44:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:18:22.949 06:44:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:22.949 06:44:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.949 06:44:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.949 06:44:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.949 06:44:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:18:22.949 06:44:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:18:22.949 06:44:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:18:22.949 
06:44:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:22.949 06:44:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.949 06:44:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.949 06:44:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.949 06:44:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:18:22.949 06:44:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:18:22.949 06:44:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.949 06:44:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:23.207 06:44:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.207 06:44:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71000 00:18:23.207 06:44:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71000 ']' 00:18:23.207 06:44:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71000 00:18:23.207 06:44:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:18:23.207 06:44:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:23.207 06:44:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71000 00:18:23.207 killing process with pid 71000 00:18:23.207 06:44:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:23.207 06:44:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:23.207 06:44:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71000' 00:18:23.207 06:44:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71000 00:18:23.207 06:44:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71000 00:18:24.579 ************************************ 00:18:24.579 END TEST xnvme_rpc 00:18:24.579 ************************************ 00:18:24.579 00:18:24.579 real 0m2.592s 00:18:24.579 user 0m2.694s 00:18:24.579 sys 0m0.360s 00:18:24.579 06:44:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:24.579 06:44:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:24.579 06:44:37 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:18:24.579 06:44:37 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:24.579 06:44:37 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:24.579 06:44:37 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:24.579 ************************************ 00:18:24.579 START TEST xnvme_bdevperf 00:18:24.579 ************************************ 00:18:24.579 06:44:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:18:24.579 06:44:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:18:24.579 06:44:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:18:24.579 06:44:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:24.579 06:44:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:18:24.579 06:44:37 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:18:24.579 06:44:37 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:24.579 06:44:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:24.579 { 00:18:24.579 "subsystems": [ 00:18:24.579 { 00:18:24.579 "subsystem": "bdev", 00:18:24.579 "config": [ 00:18:24.579 { 00:18:24.579 "params": { 00:18:24.579 "io_mechanism": "io_uring_cmd", 00:18:24.579 "conserve_cpu": false, 00:18:24.579 "filename": "/dev/ng0n1", 00:18:24.579 "name": "xnvme_bdev" 00:18:24.579 }, 00:18:24.579 "method": "bdev_xnvme_create" 00:18:24.579 }, 00:18:24.579 { 00:18:24.579 "method": "bdev_wait_for_examine" 00:18:24.579 } 00:18:24.579 ] 00:18:24.579 } 00:18:24.579 ] 00:18:24.579 } 00:18:24.836 [2024-12-06 06:44:37.319232] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:18:24.837 [2024-12-06 06:44:37.319499] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71073 ] 00:18:24.837 [2024-12-06 06:44:37.478805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.094 [2024-12-06 06:44:37.576631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.094 Running I/O for 5 seconds... 00:18:27.397 66134.00 IOPS, 258.34 MiB/s [2024-12-06T06:44:41.072Z] 64670.00 IOPS, 252.62 MiB/s [2024-12-06T06:44:42.001Z] 64855.33 IOPS, 253.34 MiB/s [2024-12-06T06:44:42.932Z] 65279.50 IOPS, 255.00 MiB/s [2024-12-06T06:44:42.932Z] 65340.80 IOPS, 255.24 MiB/s 00:18:30.191 Latency(us) 00:18:30.191 [2024-12-06T06:44:42.932Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.191 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:18:30.191 xnvme_bdev : 5.00 65303.50 255.09 0.00 0.00 976.05 378.09 83482.78 00:18:30.191 [2024-12-06T06:44:42.932Z] =================================================================================================================== 00:18:30.191 [2024-12-06T06:44:42.932Z] Total : 65303.50 255.09 0.00 0.00 976.05 378.09 83482.78 00:18:31.122 06:44:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:31.122 06:44:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:18:31.122 06:44:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:31.122 06:44:43 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:31.122 06:44:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:31.122 { 00:18:31.122 "subsystems": [ 00:18:31.122 { 00:18:31.122 "subsystem": "bdev", 00:18:31.122 "config": [ 00:18:31.122 { 00:18:31.122 "params": { 00:18:31.122 "io_mechanism": "io_uring_cmd", 00:18:31.122 "conserve_cpu": false, 00:18:31.122 "filename": "/dev/ng0n1", 00:18:31.122 "name": "xnvme_bdev" 00:18:31.122 }, 00:18:31.122 "method": "bdev_xnvme_create" 00:18:31.122 }, 00:18:31.122 { 00:18:31.122 "method": "bdev_wait_for_examine" 00:18:31.122 } 00:18:31.122 ] 00:18:31.122 } 00:18:31.122 ] 00:18:31.122 } 00:18:31.122 [2024-12-06 06:44:43.605128] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
00:18:31.122 [2024-12-06 06:44:43.605243] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71147 ] 00:18:31.122 [2024-12-06 06:44:43.762337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.379 [2024-12-06 06:44:43.861392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.379 Running I/O for 5 seconds... 00:18:33.686 60608.00 IOPS, 236.75 MiB/s [2024-12-06T06:44:47.375Z] 60544.00 IOPS, 236.50 MiB/s [2024-12-06T06:44:48.307Z] 61237.67 IOPS, 239.21 MiB/s [2024-12-06T06:44:49.241Z] 61688.25 IOPS, 240.97 MiB/s 00:18:36.500 Latency(us) 00:18:36.500 [2024-12-06T06:44:49.241Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.500 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:18:36.500 xnvme_bdev : 5.00 61370.96 239.73 0.00 0.00 1038.36 620.70 3201.18 00:18:36.500 [2024-12-06T06:44:49.241Z] =================================================================================================================== 00:18:36.500 [2024-12-06T06:44:49.241Z] Total : 61370.96 239.73 0.00 0.00 1038.36 620.70 3201.18 00:18:37.433 06:44:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:37.433 06:44:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:18:37.433 06:44:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:37.433 06:44:49 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:37.433 06:44:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:37.433 { 00:18:37.433 "subsystems": [ 00:18:37.433 { 00:18:37.433 "subsystem": "bdev", 00:18:37.433 "config": [ 00:18:37.433 { 00:18:37.433 "params": { 00:18:37.433 "io_mechanism": "io_uring_cmd", 00:18:37.433 "conserve_cpu": false, 00:18:37.433 "filename": "/dev/ng0n1", 00:18:37.433 "name": "xnvme_bdev" 00:18:37.433 }, 00:18:37.433 "method": "bdev_xnvme_create" 00:18:37.433 }, 00:18:37.433 { 00:18:37.433 "method": "bdev_wait_for_examine" 00:18:37.433 } 00:18:37.433 ] 00:18:37.433 } 00:18:37.433 ] 00:18:37.433 } 00:18:37.433 [2024-12-06 06:44:49.888204] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:18:37.433 [2024-12-06 06:44:49.888322] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71221 ] 00:18:37.433 [2024-12-06 06:44:50.045815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.433 [2024-12-06 06:44:50.145704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.691 Running I/O for 5 seconds... 
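Every bdevperf pass in this section shares one invocation shape: gen_conf emits the JSON config shown above and the test hands it to bdevperf on file descriptor 62 (--json /dev/fd/62). A minimal standalone reproduction of the unmap run starting here, assuming the build-tree path from this log and an NVMe char device at /dev/ng0n1; the JSON body and the flags are copied from the trace, with process substitution standing in for the fd 62 plumbing:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk   # assumption: build-tree path as logged
    conf='{
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "io_mechanism": "io_uring_cmd",
                "conserve_cpu": false,
                "filename": "/dev/ng0n1",
                "name": "xnvme_bdev"
              },
              "method": "bdev_xnvme_create"
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }'
    # queue depth 64, unmap workload, 5 s runtime, 4 KiB I/O against the xnvme_bdev target
    "$SPDK_DIR/build/examples/bdevperf" --json <(printf '%s\n' "$conf") \
        -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096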
00:18:39.703 96320.00 IOPS, 376.25 MiB/s [2024-12-06T06:44:53.818Z] 94400.00 IOPS, 368.75 MiB/s [2024-12-06T06:44:54.751Z] 95360.00 IOPS, 372.50 MiB/s [2024-12-06T06:44:55.682Z] 95696.00 IOPS, 373.81 MiB/s [2024-12-06T06:44:55.682Z] 95488.00 IOPS, 373.00 MiB/s 00:18:42.941 Latency(us) 00:18:42.941 [2024-12-06T06:44:55.682Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.941 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:18:42.941 xnvme_bdev : 5.00 95438.99 372.81 0.00 0.00 667.16 466.31 2646.65 00:18:42.941 [2024-12-06T06:44:55.682Z] =================================================================================================================== 00:18:42.941 [2024-12-06T06:44:55.682Z] Total : 95438.99 372.81 0.00 0.00 667.16 466.31 2646.65 00:18:43.504 06:44:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:43.504 06:44:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:18:43.504 06:44:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:43.504 06:44:56 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:43.504 06:44:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:43.504 { 00:18:43.504 "subsystems": [ 00:18:43.504 { 00:18:43.504 "subsystem": "bdev", 00:18:43.504 "config": [ 00:18:43.504 { 00:18:43.504 "params": { 00:18:43.504 "io_mechanism": "io_uring_cmd", 00:18:43.504 "conserve_cpu": false, 00:18:43.504 "filename": "/dev/ng0n1", 00:18:43.504 "name": "xnvme_bdev" 00:18:43.504 }, 00:18:43.504 "method": "bdev_xnvme_create" 00:18:43.504 }, 00:18:43.504 { 00:18:43.505 "method": "bdev_wait_for_examine" 00:18:43.505 } 00:18:43.505 ] 00:18:43.505 } 00:18:43.505 ] 00:18:43.505 } 00:18:43.762 [2024-12-06 06:44:56.244550] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:18:43.762 [2024-12-06 06:44:56.244668] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71298 ] 00:18:43.762 [2024-12-06 06:44:56.399401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.762 [2024-12-06 06:44:56.497525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:44.019 Running I/O for 5 seconds... 
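As a quick consistency check on the unmap table above: the MiB/s column is just IOPS times the 4 KiB I/O size. For the reported total of 95438.99 IOPS:

    # 95438.99 IOPS * 4096 B per I/O, converted to MiB/s (1 MiB = 2^20 B)
    awk 'BEGIN { printf "%.2f\n", 95438.99 * 4096 / (1024 * 1024) }'   # prints 372.81, matching the table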
00:18:46.317 2695.00 IOPS, 10.53 MiB/s [2024-12-06T06:44:59.990Z] 3872.50 IOPS, 15.13 MiB/s [2024-12-06T06:45:00.922Z] 26050.33 IOPS, 101.76 MiB/s [2024-12-06T06:45:01.859Z] 34308.25 IOPS, 134.02 MiB/s [2024-12-06T06:45:01.859Z] 32457.20 IOPS, 126.79 MiB/s 00:18:49.118 Latency(us) 00:18:49.118 [2024-12-06T06:45:01.859Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.118 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:18:49.118 xnvme_bdev : 5.00 32436.62 126.71 0.00 0.00 1968.78 51.20 178257.92 00:18:49.118 [2024-12-06T06:45:01.859Z] =================================================================================================================== 00:18:49.118 [2024-12-06T06:45:01.859Z] Total : 32436.62 126.71 0.00 0.00 1968.78 51.20 178257.92 00:18:50.050 00:18:50.050 real 0m25.278s 00:18:50.050 user 0m14.314s 00:18:50.050 sys 0m10.553s 00:18:50.050 06:45:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:50.050 06:45:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:50.050 ************************************ 00:18:50.050 END TEST xnvme_bdevperf 00:18:50.050 ************************************ 00:18:50.050 06:45:02 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:18:50.050 06:45:02 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:50.050 06:45:02 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:50.050 06:45:02 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:50.050 ************************************ 00:18:50.050 START TEST xnvme_fio_plugin 00:18:50.050 ************************************ 00:18:50.050 06:45:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:18:50.050 06:45:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:18:50.050 06:45:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:18:50.050 06:45:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:50.050 06:45:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:50.050 06:45:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:50.050 06:45:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:50.050 06:45:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:50.050 06:45:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:50.050 06:45:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:50.050 06:45:02 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:50.050 06:45:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:50.050 06:45:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 
00:18:50.050 06:45:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:50.050 06:45:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:50.050 06:45:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:50.050 06:45:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:50.050 06:45:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:50.050 06:45:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:50.050 06:45:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:50.050 06:45:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:50.050 06:45:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:50.050 06:45:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:50.050 06:45:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:50.050 { 00:18:50.050 "subsystems": [ 00:18:50.050 { 00:18:50.050 "subsystem": "bdev", 00:18:50.050 "config": [ 00:18:50.050 { 00:18:50.050 "params": { 00:18:50.050 "io_mechanism": "io_uring_cmd", 00:18:50.050 "conserve_cpu": false, 00:18:50.050 "filename": "/dev/ng0n1", 00:18:50.050 "name": "xnvme_bdev" 00:18:50.050 }, 00:18:50.050 "method": "bdev_xnvme_create" 00:18:50.050 }, 00:18:50.050 { 00:18:50.050 "method": "bdev_wait_for_examine" 00:18:50.050 } 00:18:50.050 ] 00:18:50.050 } 00:18:50.050 ] 00:18:50.050 } 00:18:50.050 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:50.050 fio-3.35 00:18:50.050 Starting 1 thread 00:18:56.631 00:18:56.631 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71408: Fri Dec 6 06:45:08 2024 00:18:56.631 read: IOPS=62.8k, BW=245MiB/s (257MB/s)(1228MiB/5001msec) 00:18:56.631 slat (nsec): min=2887, max=89517, avg=3618.27, stdev=1240.55 00:18:56.631 clat (usec): min=561, max=2792, avg=877.86, stdev=158.23 00:18:56.631 lat (usec): min=564, max=2824, avg=881.48, stdev=158.47 00:18:56.631 clat percentiles (usec): 00:18:56.631 | 1.00th=[ 644], 5.00th=[ 676], 10.00th=[ 701], 20.00th=[ 742], 00:18:56.631 | 30.00th=[ 775], 40.00th=[ 816], 50.00th=[ 848], 60.00th=[ 889], 00:18:56.631 | 70.00th=[ 938], 80.00th=[ 1012], 90.00th=[ 1090], 95.00th=[ 1156], 00:18:56.631 | 99.00th=[ 1336], 99.50th=[ 1434], 99.90th=[ 1680], 99.95th=[ 1893], 00:18:56.631 | 99.99th=[ 2474] 00:18:56.631 bw ( KiB/s): min=241152, max=260608, per=99.79%, avg=250823.11, stdev=6125.01, samples=9 00:18:56.631 iops : min=60288, max=65152, avg=62705.78, stdev=1531.25, samples=9 00:18:56.631 lat (usec) : 750=22.27%, 1000=56.69% 00:18:56.631 lat (msec) : 2=20.99%, 4=0.04% 00:18:56.631 cpu : usr=42.80%, sys=56.40%, ctx=10, majf=0, minf=762 00:18:56.631 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:18:56.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.631 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, 
>=64=0.0% 00:18:56.631 issued rwts: total=314240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.631 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:56.631 00:18:56.631 Run status group 0 (all jobs): 00:18:56.631 READ: bw=245MiB/s (257MB/s), 245MiB/s-245MiB/s (257MB/s-257MB/s), io=1228MiB (1287MB), run=5001-5001msec 00:18:56.631 ----------------------------------------------------- 00:18:56.631 Suppressions used: 00:18:56.631 count bytes template 00:18:56.631 1 11 /usr/src/fio/parse.c 00:18:56.631 1 8 libtcmalloc_minimal.so 00:18:56.631 1 904 libcrypto.so 00:18:56.631 ----------------------------------------------------- 00:18:56.631 00:18:56.631 06:45:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:56.631 06:45:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:56.631 06:45:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:56.631 06:45:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:56.631 06:45:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:56.631 06:45:09 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:56.631 06:45:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:56.631 06:45:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:56.631 06:45:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:56.631 06:45:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:56.631 06:45:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:56.631 06:45:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:56.631 06:45:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:56.631 06:45:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:56.631 06:45:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:56.631 06:45:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:56.631 06:45:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:56.631 06:45:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:56.631 06:45:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:56.631 06:45:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:56.631 06:45:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 
--numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:56.631 { 00:18:56.631 "subsystems": [ 00:18:56.631 { 00:18:56.631 "subsystem": "bdev", 00:18:56.631 "config": [ 00:18:56.631 { 00:18:56.631 "params": { 00:18:56.631 "io_mechanism": "io_uring_cmd", 00:18:56.631 "conserve_cpu": false, 00:18:56.631 "filename": "/dev/ng0n1", 00:18:56.631 "name": "xnvme_bdev" 00:18:56.631 }, 00:18:56.631 "method": "bdev_xnvme_create" 00:18:56.631 }, 00:18:56.631 { 00:18:56.631 "method": "bdev_wait_for_examine" 00:18:56.631 } 00:18:56.631 ] 00:18:56.631 } 00:18:56.631 ] 00:18:56.631 } 00:18:56.893 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:56.893 fio-3.35 00:18:56.893 Starting 1 thread 00:19:03.491 00:19:03.491 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71500: Fri Dec 6 06:45:14 2024 00:19:03.491 write: IOPS=54.8k, BW=214MiB/s (224MB/s)(1070MiB/5001msec); 0 zone resets 00:19:03.491 slat (nsec): min=2272, max=89000, avg=4137.72, stdev=1983.79 00:19:03.491 clat (usec): min=64, max=18751, avg=1006.96, stdev=615.89 00:19:03.491 lat (usec): min=67, max=18754, avg=1011.10, stdev=616.16 00:19:03.491 clat percentiles (usec): 00:19:03.491 | 1.00th=[ 635], 5.00th=[ 685], 10.00th=[ 717], 20.00th=[ 766], 00:19:03.491 | 30.00th=[ 807], 40.00th=[ 848], 50.00th=[ 889], 60.00th=[ 947], 00:19:03.491 | 70.00th=[ 1012], 80.00th=[ 1106], 90.00th=[ 1287], 95.00th=[ 1549], 00:19:03.491 | 99.00th=[ 3556], 99.50th=[ 4752], 99.90th=[ 8455], 99.95th=[10814], 00:19:03.491 | 99.99th=[18744] 00:19:03.491 bw ( KiB/s): min=124944, max=258048, per=99.73%, avg=218595.56, stdev=39618.88, samples=9 00:19:03.491 iops : min=31236, max=64512, avg=54648.89, stdev=9904.72, samples=9 00:19:03.491 lat (usec) : 100=0.01%, 250=0.07%, 500=0.21%, 750=16.58%, 1000=52.02% 00:19:03.491 lat (msec) : 2=28.87%, 4=1.49%, 10=0.70%, 20=0.06% 00:19:03.491 cpu : usr=45.00%, sys=54.14%, ctx=12, majf=0, minf=763 00:19:03.491 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.2%, 16=24.5%, 32=50.9%, >=64=1.7% 00:19:03.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.491 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:19:03.491 issued rwts: total=0,274047,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.491 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:03.491 00:19:03.491 Run status group 0 (all jobs): 00:19:03.491 WRITE: bw=214MiB/s (224MB/s), 214MiB/s-214MiB/s (224MB/s-224MB/s), io=1070MiB (1122MB), run=5001-5001msec 00:19:03.491 ----------------------------------------------------- 00:19:03.491 Suppressions used: 00:19:03.491 count bytes template 00:19:03.491 1 11 /usr/src/fio/parse.c 00:19:03.491 1 8 libtcmalloc_minimal.so 00:19:03.491 1 904 libcrypto.so 00:19:03.491 ----------------------------------------------------- 00:19:03.491 00:19:03.491 00:19:03.491 real 0m13.332s 00:19:03.491 user 0m6.916s 00:19:03.491 sys 0m6.008s 00:19:03.491 06:45:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:03.491 06:45:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:03.491 ************************************ 00:19:03.491 END TEST xnvme_fio_plugin 00:19:03.491 ************************************ 00:19:03.491 06:45:15 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:19:03.491 06:45:15 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:19:03.491 06:45:15 
nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:19:03.491 06:45:15 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:19:03.491 06:45:15 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:03.491 06:45:15 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:03.491 06:45:15 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:03.491 ************************************ 00:19:03.491 START TEST xnvme_rpc 00:19:03.491 ************************************ 00:19:03.491 06:45:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:19:03.491 06:45:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:19:03.491 06:45:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:19:03.491 06:45:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:19:03.491 06:45:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:19:03.491 06:45:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71585 00:19:03.491 06:45:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71585 00:19:03.491 06:45:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71585 ']' 00:19:03.491 06:45:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:03.491 06:45:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:03.491 06:45:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:03.491 06:45:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:03.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:03.492 06:45:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:03.492 06:45:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:03.492 [2024-12-06 06:45:16.032784] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
00:19:03.492 [2024-12-06 06:45:16.032920] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71585 ] 00:19:03.492 [2024-12-06 06:45:16.195959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.749 [2024-12-06 06:45:16.295888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.315 06:45:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:04.315 06:45:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:19:04.315 06:45:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:19:04.315 06:45:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.315 06:45:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:04.315 xnvme_bdev 00:19:04.315 06:45:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.315 06:45:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:19:04.315 06:45:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:04.315 06:45:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:19:04.315 06:45:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.315 06:45:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:04.315 06:45:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.315 06:45:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:19:04.315 06:45:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:19:04.315 06:45:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:04.315 06:45:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.315 06:45:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:04.315 06:45:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:19:04.315 06:45:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.315 06:45:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:19:04.315 06:45:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:19:04.315 06:45:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:04.315 06:45:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.315 06:45:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:04.315 06:45:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:19:04.315 06:45:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.315 06:45:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:19:04.315 06:45:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:19:04.315 06:45:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:04.315 06:45:16 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.315 06:45:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:04.315 06:45:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:19:04.315 06:45:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.315 06:45:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:19:04.315 06:45:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:19:04.315 06:45:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.315 06:45:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:04.315 06:45:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.315 06:45:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71585 00:19:04.315 06:45:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71585 ']' 00:19:04.315 06:45:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71585 00:19:04.315 06:45:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:19:04.315 06:45:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:04.315 06:45:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71585 00:19:04.574 06:45:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:04.574 06:45:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:04.574 killing process with pid 71585 00:19:04.574 06:45:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71585' 00:19:04.574 06:45:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71585 00:19:04.574 06:45:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71585 00:19:05.958 00:19:05.958 real 0m2.636s 00:19:05.958 user 0m2.737s 00:19:05.958 sys 0m0.354s 00:19:05.958 06:45:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:05.958 06:45:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:05.958 ************************************ 00:19:05.958 END TEST xnvme_rpc 00:19:05.958 ************************************ 00:19:05.958 06:45:18 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:19:05.958 06:45:18 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:05.958 06:45:18 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:05.958 06:45:18 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:05.958 ************************************ 00:19:05.958 START TEST xnvme_bdevperf 00:19:05.958 ************************************ 00:19:05.958 06:45:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:19:05.958 06:45:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:19:05.958 06:45:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:19:05.958 06:45:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:05.958 06:45:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:19:05.958 06:45:18 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:19:05.958 06:45:18 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:05.958 06:45:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:05.958 { 00:19:05.958 "subsystems": [ 00:19:05.958 { 00:19:05.958 "subsystem": "bdev", 00:19:05.958 "config": [ 00:19:05.958 { 00:19:05.958 "params": { 00:19:05.958 "io_mechanism": "io_uring_cmd", 00:19:05.958 "conserve_cpu": true, 00:19:05.958 "filename": "/dev/ng0n1", 00:19:05.958 "name": "xnvme_bdev" 00:19:05.958 }, 00:19:05.958 "method": "bdev_xnvme_create" 00:19:05.958 }, 00:19:05.958 { 00:19:05.958 "method": "bdev_wait_for_examine" 00:19:05.958 } 00:19:05.958 ] 00:19:05.958 } 00:19:05.958 ] 00:19:05.958 } 00:19:05.958 [2024-12-06 06:45:18.679058] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:19:05.958 [2024-12-06 06:45:18.679174] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71655 ] 00:19:06.222 [2024-12-06 06:45:18.837967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.222 [2024-12-06 06:45:18.943205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:06.485 Running I/O for 5 seconds... 00:19:08.809 62464.00 IOPS, 244.00 MiB/s [2024-12-06T06:45:22.492Z] 63456.00 IOPS, 247.88 MiB/s [2024-12-06T06:45:23.493Z] 63338.67 IOPS, 247.42 MiB/s [2024-12-06T06:45:24.432Z] 62960.00 IOPS, 245.94 MiB/s 00:19:11.691 Latency(us) 00:19:11.691 [2024-12-06T06:45:24.432Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:11.691 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:19:11.691 xnvme_bdev : 5.00 63015.36 246.15 0.00 0.00 1011.55 595.50 3402.83 00:19:11.691 [2024-12-06T06:45:24.432Z] =================================================================================================================== 00:19:11.691 [2024-12-06T06:45:24.432Z] Total : 63015.36 246.15 0.00 0.00 1011.55 595.50 3402.83 00:19:12.257 06:45:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:12.257 06:45:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:19:12.257 06:45:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:19:12.257 06:45:24 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:12.257 06:45:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:12.257 { 00:19:12.257 "subsystems": [ 00:19:12.257 { 00:19:12.257 "subsystem": "bdev", 00:19:12.257 "config": [ 00:19:12.257 { 00:19:12.257 "params": { 00:19:12.257 "io_mechanism": "io_uring_cmd", 00:19:12.257 "conserve_cpu": true, 00:19:12.257 "filename": "/dev/ng0n1", 00:19:12.257 "name": "xnvme_bdev" 00:19:12.257 }, 00:19:12.258 "method": "bdev_xnvme_create" 00:19:12.258 }, 00:19:12.258 { 00:19:12.258 "method": "bdev_wait_for_examine" 00:19:12.258 } 00:19:12.258 ] 00:19:12.258 } 00:19:12.258 ] 00:19:12.258 } 00:19:12.258 [2024-12-06 06:45:24.984270] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
00:19:12.258 [2024-12-06 06:45:24.984391] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71729 ] 00:19:12.513 [2024-12-06 06:45:25.145441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.513 [2024-12-06 06:45:25.246717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.770 Running I/O for 5 seconds... 00:19:15.139 56640.00 IOPS, 221.25 MiB/s [2024-12-06T06:45:28.826Z] 55455.00 IOPS, 216.62 MiB/s [2024-12-06T06:45:29.764Z] 56618.00 IOPS, 221.16 MiB/s [2024-12-06T06:45:30.701Z] 57087.50 IOPS, 223.00 MiB/s [2024-12-06T06:45:30.701Z] 57522.80 IOPS, 224.70 MiB/s 00:19:17.960 Latency(us) 00:19:17.960 [2024-12-06T06:45:30.701Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.960 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:19:17.960 xnvme_bdev : 5.01 57474.03 224.51 0.00 0.00 1108.82 351.31 54445.29 00:19:17.960 [2024-12-06T06:45:30.701Z] =================================================================================================================== 00:19:17.960 [2024-12-06T06:45:30.701Z] Total : 57474.03 224.51 0.00 0.00 1108.82 351.31 54445.29 00:19:18.526 06:45:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:18.526 06:45:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:19:18.526 06:45:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:19:18.526 06:45:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:18.526 06:45:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:18.526 { 00:19:18.526 "subsystems": [ 00:19:18.526 { 00:19:18.526 "subsystem": "bdev", 00:19:18.526 "config": [ 00:19:18.526 { 00:19:18.526 "params": { 00:19:18.526 "io_mechanism": "io_uring_cmd", 00:19:18.526 "conserve_cpu": true, 00:19:18.526 "filename": "/dev/ng0n1", 00:19:18.526 "name": "xnvme_bdev" 00:19:18.526 }, 00:19:18.526 "method": "bdev_xnvme_create" 00:19:18.526 }, 00:19:18.526 { 00:19:18.526 "method": "bdev_wait_for_examine" 00:19:18.526 } 00:19:18.526 ] 00:19:18.526 } 00:19:18.526 ] 00:19:18.526 } 00:19:18.828 [2024-12-06 06:45:31.289651] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:19:18.828 [2024-12-06 06:45:31.289779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71803 ] 00:19:18.828 [2024-12-06 06:45:31.450878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.828 [2024-12-06 06:45:31.549560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.086 Running I/O for 5 seconds... 
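This second bdevperf matrix runs with "conserve_cpu": true in its JSON configs; as the xnvme_rpc trace earlier in this log shows (cc["true"]=-c), that flag is just the -c switch of bdev_xnvme_create. A hedged standalone equivalent against an already running spdk_tgt, reusing the tree path from this log:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk   # assumption: build-tree path as logged
    "$SPDK_DIR/scripts/rpc.py" bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
    # verify the flag stuck, mirroring the framework_get_config | jq probe in the trace
    "$SPDK_DIR/scripts/rpc.py" framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expect true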
00:19:21.391 86080.00 IOPS, 336.25 MiB/s [2024-12-06T06:45:35.061Z] 86016.00 IOPS, 336.00 MiB/s [2024-12-06T06:45:36.036Z] 86314.67 IOPS, 337.17 MiB/s [2024-12-06T06:45:36.968Z] 87664.00 IOPS, 342.44 MiB/s 00:19:24.227 Latency(us) 00:19:24.227 [2024-12-06T06:45:36.968Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.227 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:19:24.227 xnvme_bdev : 5.00 88847.37 347.06 0.00 0.00 716.79 363.91 2810.49 00:19:24.227 [2024-12-06T06:45:36.968Z] =================================================================================================================== 00:19:24.227 [2024-12-06T06:45:36.968Z] Total : 88847.37 347.06 0.00 0.00 716.79 363.91 2810.49 00:19:24.792 06:45:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:24.792 06:45:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:19:24.792 06:45:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:19:24.792 06:45:37 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:24.792 06:45:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:25.050 { 00:19:25.050 "subsystems": [ 00:19:25.050 { 00:19:25.050 "subsystem": "bdev", 00:19:25.050 "config": [ 00:19:25.050 { 00:19:25.050 "params": { 00:19:25.050 "io_mechanism": "io_uring_cmd", 00:19:25.050 "conserve_cpu": true, 00:19:25.050 "filename": "/dev/ng0n1", 00:19:25.050 "name": "xnvme_bdev" 00:19:25.050 }, 00:19:25.050 "method": "bdev_xnvme_create" 00:19:25.050 }, 00:19:25.050 { 00:19:25.050 "method": "bdev_wait_for_examine" 00:19:25.050 } 00:19:25.050 ] 00:19:25.050 } 00:19:25.050 ] 00:19:25.050 } 00:19:25.050 [2024-12-06 06:45:37.581387] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:19:25.050 [2024-12-06 06:45:37.581520] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71877 ] 00:19:25.050 [2024-12-06 06:45:37.740138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.309 [2024-12-06 06:45:37.842034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.567 Running I/O for 5 seconds... 
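The xnvme_fio_plugin passes around these bdevperf runs (another one follows just below) repeat the same preload dance visible in their xtrace: resolve the ASan runtime that the fio plugin links against and put it first on LD_PRELOAD, since the sanitizer runtime must be the first DSO loaded. A standalone sketch, assuming the plugin path from this log and that the JSON config shown in the traces has been saved to ./xnvme_bdev.json (--spdk_json_conf accepts any readable path, /dev/fd/62 in the test itself):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev      # plugin path as logged
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')  # same probe as the trace
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf=./xnvme_bdev.json --filename=xnvme_bdev \
        --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread \
        --time_based --runtime=5 --thread=1 --name xnvme_bdev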
00:19:27.435 47284.00 IOPS, 184.70 MiB/s [2024-12-06T06:45:41.108Z] 42588.00 IOPS, 166.36 MiB/s [2024-12-06T06:45:42.478Z] 40809.33 IOPS, 159.41 MiB/s [2024-12-06T06:45:43.410Z] 41091.00 IOPS, 160.51 MiB/s [2024-12-06T06:45:43.410Z] 42106.80 IOPS, 164.48 MiB/s 00:19:30.669 Latency(us) 00:19:30.669 [2024-12-06T06:45:43.410Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:30.669 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:19:30.669 xnvme_bdev : 5.00 42082.79 164.39 0.00 0.00 1515.47 51.20 128248.91 00:19:30.669 [2024-12-06T06:45:43.410Z] =================================================================================================================== 00:19:30.669 [2024-12-06T06:45:43.410Z] Total : 42082.79 164.39 0.00 0.00 1515.47 51.20 128248.91 00:19:31.235 00:19:31.235 real 0m25.194s 00:19:31.235 user 0m15.202s 00:19:31.235 sys 0m8.346s 00:19:31.235 06:45:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:31.235 06:45:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:31.235 ************************************ 00:19:31.235 END TEST xnvme_bdevperf 00:19:31.235 ************************************ 00:19:31.235 06:45:43 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:19:31.235 06:45:43 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:31.235 06:45:43 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:31.235 06:45:43 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:31.235 ************************************ 00:19:31.235 START TEST xnvme_fio_plugin 00:19:31.235 ************************************ 00:19:31.235 06:45:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:19:31.235 06:45:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:19:31.235 06:45:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:19:31.235 06:45:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:31.235 06:45:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:31.235 06:45:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:31.235 06:45:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:31.235 06:45:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:31.235 06:45:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:31.235 06:45:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:31.235 06:45:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:31.235 06:45:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:31.235 06:45:43 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:31.235 06:45:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:31.235 06:45:43 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:31.235 06:45:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:31.235 06:45:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:31.235 06:45:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:31.235 06:45:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:31.235 06:45:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:31.235 06:45:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:31.235 06:45:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:31.235 06:45:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:31.235 06:45:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:31.235 { 00:19:31.235 "subsystems": [ 00:19:31.235 { 00:19:31.235 "subsystem": "bdev", 00:19:31.235 "config": [ 00:19:31.235 { 00:19:31.235 "params": { 00:19:31.235 "io_mechanism": "io_uring_cmd", 00:19:31.235 "conserve_cpu": true, 00:19:31.235 "filename": "/dev/ng0n1", 00:19:31.235 "name": "xnvme_bdev" 00:19:31.235 }, 00:19:31.235 "method": "bdev_xnvme_create" 00:19:31.235 }, 00:19:31.235 { 00:19:31.235 "method": "bdev_wait_for_examine" 00:19:31.235 } 00:19:31.235 ] 00:19:31.235 } 00:19:31.235 ] 00:19:31.235 } 00:19:31.524 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:31.524 fio-3.35 00:19:31.524 Starting 1 thread 00:19:38.074 00:19:38.074 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71990: Fri Dec 6 06:45:49 2024 00:19:38.074 read: IOPS=63.8k, BW=249MiB/s (261MB/s)(1245MiB/5001msec) 00:19:38.074 slat (usec): min=2, max=443, avg= 3.53, stdev= 1.63 00:19:38.074 clat (usec): min=561, max=4209, avg=866.14, stdev=151.80 00:19:38.074 lat (usec): min=564, max=4213, avg=869.67, stdev=152.09 00:19:38.074 clat percentiles (usec): 00:19:38.074 | 1.00th=[ 644], 5.00th=[ 676], 10.00th=[ 701], 20.00th=[ 742], 00:19:38.074 | 30.00th=[ 775], 40.00th=[ 807], 50.00th=[ 840], 60.00th=[ 873], 00:19:38.074 | 70.00th=[ 914], 80.00th=[ 979], 90.00th=[ 1074], 95.00th=[ 1139], 00:19:38.074 | 99.00th=[ 1352], 99.50th=[ 1434], 99.90th=[ 1598], 99.95th=[ 1680], 00:19:38.074 | 99.99th=[ 2024] 00:19:38.074 bw ( KiB/s): min=245248, max=263680, per=99.94%, avg=254859.56, stdev=5847.75, samples=9 00:19:38.074 iops : min=61312, max=65920, avg=63714.89, stdev=1462.36, samples=9 00:19:38.074 lat (usec) : 750=23.22%, 1000=59.29% 00:19:38.074 lat (msec) : 2=17.48%, 4=0.01%, 10=0.01% 00:19:38.074 cpu : usr=47.42%, sys=50.30%, ctx=13, majf=0, minf=762 00:19:38.074 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:19:38.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:38.074 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=1.5%, >=64=0.0% 00:19:38.074 issued rwts: total=318843,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:38.074 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:38.074 00:19:38.074 Run status group 0 (all jobs): 00:19:38.074 READ: bw=249MiB/s (261MB/s), 249MiB/s-249MiB/s (261MB/s-261MB/s), io=1245MiB (1306MB), run=5001-5001msec 00:19:38.074 ----------------------------------------------------- 00:19:38.074 Suppressions used: 00:19:38.074 count bytes template 00:19:38.074 1 11 /usr/src/fio/parse.c 00:19:38.074 1 8 libtcmalloc_minimal.so 00:19:38.074 1 904 libcrypto.so 00:19:38.074 ----------------------------------------------------- 00:19:38.074 00:19:38.074 06:45:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:38.074 06:45:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:38.074 06:45:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:38.074 06:45:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:38.074 06:45:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:38.074 06:45:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:38.074 06:45:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:38.074 06:45:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:38.074 06:45:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:38.074 06:45:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:38.074 06:45:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:38.074 06:45:50 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:38.074 06:45:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:38.074 06:45:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:38.074 06:45:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:38.074 06:45:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:38.074 06:45:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:38.074 06:45:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:38.074 06:45:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:38.074 06:45:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:38.074 06:45:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k 
--iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:38.074 { 00:19:38.074 "subsystems": [ 00:19:38.074 { 00:19:38.074 "subsystem": "bdev", 00:19:38.074 "config": [ 00:19:38.074 { 00:19:38.074 "params": { 00:19:38.074 "io_mechanism": "io_uring_cmd", 00:19:38.074 "conserve_cpu": true, 00:19:38.074 "filename": "/dev/ng0n1", 00:19:38.074 "name": "xnvme_bdev" 00:19:38.074 }, 00:19:38.074 "method": "bdev_xnvme_create" 00:19:38.074 }, 00:19:38.074 { 00:19:38.074 "method": "bdev_wait_for_examine" 00:19:38.074 } 00:19:38.074 ] 00:19:38.074 } 00:19:38.074 ] 00:19:38.074 } 00:19:38.074 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:38.074 fio-3.35 00:19:38.074 Starting 1 thread 00:19:44.627 00:19:44.627 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72081: Fri Dec 6 06:45:56 2024 00:19:44.627 write: IOPS=56.1k, BW=219MiB/s (230MB/s)(1096MiB/5001msec); 0 zone resets 00:19:44.627 slat (usec): min=2, max=109, avg= 4.36, stdev= 2.25 00:19:44.627 clat (usec): min=593, max=2802, avg=971.61, stdev=211.93 00:19:44.627 lat (usec): min=596, max=2809, avg=975.97, stdev=213.04 00:19:44.627 clat percentiles (usec): 00:19:44.627 | 1.00th=[ 668], 5.00th=[ 717], 10.00th=[ 750], 20.00th=[ 799], 00:19:44.627 | 30.00th=[ 840], 40.00th=[ 881], 50.00th=[ 930], 60.00th=[ 979], 00:19:44.627 | 70.00th=[ 1045], 80.00th=[ 1123], 90.00th=[ 1237], 95.00th=[ 1385], 00:19:44.627 | 99.00th=[ 1680], 99.50th=[ 1795], 99.90th=[ 2008], 99.95th=[ 2114], 00:19:44.627 | 99.99th=[ 2573] 00:19:44.627 bw ( KiB/s): min=207872, max=248832, per=99.67%, avg=223573.33, stdev=11019.90, samples=9 00:19:44.627 iops : min=51968, max=62208, avg=55893.33, stdev=2754.98, samples=9 00:19:44.627 lat (usec) : 750=10.47%, 1000=52.68% 00:19:44.627 lat (msec) : 2=36.74%, 4=0.11% 00:19:44.627 cpu : usr=49.98%, sys=47.50%, ctx=9, majf=0, minf=763 00:19:44.627 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:19:44.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:44.627 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:19:44.627 issued rwts: total=0,280448,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:44.627 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:44.627 00:19:44.627 Run status group 0 (all jobs): 00:19:44.627 WRITE: bw=219MiB/s (230MB/s), 219MiB/s-219MiB/s (230MB/s-230MB/s), io=1096MiB (1149MB), run=5001-5001msec 00:19:44.628 ----------------------------------------------------- 00:19:44.628 Suppressions used: 00:19:44.628 count bytes template 00:19:44.628 1 11 /usr/src/fio/parse.c 00:19:44.628 1 8 libtcmalloc_minimal.so 00:19:44.628 1 904 libcrypto.so 00:19:44.628 ----------------------------------------------------- 00:19:44.628 00:19:44.628 00:19:44.628 real 0m13.498s 00:19:44.628 user 0m7.546s 00:19:44.628 sys 0m5.382s 00:19:44.628 06:45:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:44.628 06:45:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:44.628 ************************************ 00:19:44.628 END TEST xnvme_fio_plugin 00:19:44.628 ************************************ 00:19:44.885 06:45:57 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 71585 00:19:44.885 06:45:57 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 71585 ']' 00:19:44.885 06:45:57 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 71585 00:19:44.885 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71585) - No such process 00:19:44.885 Process with pid 71585 is not found 00:19:44.885 06:45:57 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 71585 is not found' 00:19:44.885 00:19:44.885 real 3m24.018s 00:19:44.885 user 1m46.897s 00:19:44.885 sys 1m19.737s 00:19:44.885 06:45:57 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:44.885 06:45:57 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:44.885 ************************************ 00:19:44.885 END TEST nvme_xnvme 00:19:44.885 ************************************ 00:19:44.885 06:45:57 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:19:44.885 06:45:57 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:44.885 06:45:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:44.885 06:45:57 -- common/autotest_common.sh@10 -- # set +x 00:19:44.885 ************************************ 00:19:44.885 START TEST blockdev_xnvme 00:19:44.885 ************************************ 00:19:44.885 06:45:57 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:19:44.885 * Looking for test storage... 00:19:44.885 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:19:44.885 06:45:57 blockdev_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:44.885 06:45:57 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:19:44.885 06:45:57 blockdev_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:44.885 06:45:57 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:44.885 06:45:57 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:44.885 06:45:57 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:44.885 06:45:57 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:44.885 06:45:57 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:19:44.885 06:45:57 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:19:44.885 06:45:57 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:19:44.885 06:45:57 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:19:44.885 06:45:57 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:19:44.885 06:45:57 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:19:44.885 06:45:57 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:19:44.885 06:45:57 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:44.885 06:45:57 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:19:44.885 06:45:57 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:19:44.885 06:45:57 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:44.885 06:45:57 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:44.885 06:45:57 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:19:44.885 06:45:57 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:19:44.885 06:45:57 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:44.885 06:45:57 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:19:44.885 06:45:57 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:19:44.885 06:45:57 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:19:44.885 06:45:57 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:19:44.885 06:45:57 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:44.885 06:45:57 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:19:44.885 06:45:57 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:19:44.885 06:45:57 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:44.885 06:45:57 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:44.885 06:45:57 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:19:44.885 06:45:57 blockdev_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:44.885 06:45:57 blockdev_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:44.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.885 --rc genhtml_branch_coverage=1 00:19:44.885 --rc genhtml_function_coverage=1 00:19:44.885 --rc genhtml_legend=1 00:19:44.885 --rc geninfo_all_blocks=1 00:19:44.885 --rc geninfo_unexecuted_blocks=1 00:19:44.885 00:19:44.885 ' 00:19:44.886 06:45:57 blockdev_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:44.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.886 --rc genhtml_branch_coverage=1 00:19:44.886 --rc genhtml_function_coverage=1 00:19:44.886 --rc genhtml_legend=1 00:19:44.886 --rc geninfo_all_blocks=1 00:19:44.886 --rc geninfo_unexecuted_blocks=1 00:19:44.886 00:19:44.886 ' 00:19:44.886 06:45:57 blockdev_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:44.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.886 --rc genhtml_branch_coverage=1 00:19:44.886 --rc genhtml_function_coverage=1 00:19:44.886 --rc genhtml_legend=1 00:19:44.886 --rc geninfo_all_blocks=1 00:19:44.886 --rc geninfo_unexecuted_blocks=1 00:19:44.886 00:19:44.886 ' 00:19:44.886 06:45:57 blockdev_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:44.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.886 --rc genhtml_branch_coverage=1 00:19:44.886 --rc genhtml_function_coverage=1 00:19:44.886 --rc genhtml_legend=1 00:19:44.886 --rc geninfo_all_blocks=1 00:19:44.886 --rc geninfo_unexecuted_blocks=1 00:19:44.886 00:19:44.886 ' 00:19:44.886 06:45:57 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:19:44.886 06:45:57 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:19:44.886 06:45:57 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:19:44.886 06:45:57 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:44.886 06:45:57 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:19:44.886 06:45:57 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:19:44.886 06:45:57 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:19:44.886 06:45:57 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:19:44.886 06:45:57 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:19:44.886 06:45:57 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:19:44.886 06:45:57 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:19:44.886 06:45:57 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:19:44.886 06:45:57 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:19:44.886 06:45:57 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:19:44.886 06:45:57 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:19:44.886 06:45:57 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:19:44.886 06:45:57 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:19:44.886 06:45:57 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:19:44.886 06:45:57 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:19:44.886 06:45:57 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:19:44.886 06:45:57 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:19:44.886 06:45:57 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:19:44.886 06:45:57 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:19:44.886 06:45:57 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:19:44.886 06:45:57 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72215 00:19:44.886 06:45:57 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:19:44.886 06:45:57 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 72215 00:19:44.886 06:45:57 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:19:44.886 06:45:57 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 72215 ']' 00:19:44.886 06:45:57 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.886 06:45:57 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:44.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:44.886 06:45:57 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:44.886 06:45:57 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:44.886 06:45:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:45.159 [2024-12-06 06:45:57.632921] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
00:19:45.159 [2024-12-06 06:45:57.633552] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72215 ] 00:19:45.159 [2024-12-06 06:45:57.821810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.419 [2024-12-06 06:45:57.943164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:45.986 06:45:58 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:45.986 06:45:58 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:19:45.986 06:45:58 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:19:45.986 06:45:58 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:19:45.986 06:45:58 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:19:45.986 06:45:58 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:19:45.986 06:45:58 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:46.244 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:46.503 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:19:46.763 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:19:46.763 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:19:46.763 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:19:46.763 06:45:59 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:19:46.763 06:45:59 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:19:46.763 06:45:59 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:19:46.763 06:45:59 blockdev_xnvme -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:19:46.763 06:45:59 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:19:46.763 06:45:59 blockdev_xnvme -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:19:46.763 06:45:59 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:19:46.763 06:45:59 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:19:46.763 06:45:59 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:19:46.763 06:45:59 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:19:46.763 06:45:59 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:19:46.763 06:45:59 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:46.763 06:45:59 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:46.763 06:45:59 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:19:46.763 06:45:59 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:19:46.763 06:45:59 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:19:46.763 06:45:59 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:46.763 06:45:59 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:46.763 06:45:59 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:19:46.763 06:45:59 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:19:46.763 06:45:59 
blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:19:46.763 06:45:59 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:46.763 06:45:59 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:46.763 06:45:59 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:19:46.763 06:45:59 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:19:46.763 06:45:59 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:19:46.763 06:45:59 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:19:46.763 06:45:59 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:19:46.763 06:45:59 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:46.763 06:45:59 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:46.763 06:45:59 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:19:46.763 06:45:59 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:19:46.763 06:45:59 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:19:46.763 06:45:59 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2c2n1 00:19:46.763 06:45:59 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2c2n1 00:19:46.763 06:45:59 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:19:46.763 06:45:59 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:46.763 06:45:59 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:19:46.763 06:45:59 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:19:46.763 06:45:59 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:19:46.763 06:45:59 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:19:46.763 06:45:59 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:19:46.763 06:45:59 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:19:46.763 06:45:59 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:46.763 06:45:59 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:46.763 06:45:59 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:19:46.763 06:45:59 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:46.763 06:45:59 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:19:46.763 06:45:59 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:46.763 06:45:59 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:19:46.763 06:45:59 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:46.763 06:45:59 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:19:46.763 06:45:59 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:46.763 06:45:59 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:19:46.763 06:45:59 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:46.763 06:45:59 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism -c") 00:19:46.763 06:45:59 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:46.763 06:45:59 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:19:46.763 06:45:59 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:46.763 06:45:59 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:19:46.763 06:45:59 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:46.763 06:45:59 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:19:46.763 06:45:59 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:46.763 06:45:59 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:19:46.763 06:45:59 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:46.763 06:45:59 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:19:46.763 06:45:59 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:46.763 06:45:59 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:19:46.763 06:45:59 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:19:46.763 06:45:59 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:19:46.763 06:45:59 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.763 06:45:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:46.763 06:45:59 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:19:46.763 nvme0n1 00:19:46.763 nvme0n2 00:19:46.763 nvme0n3 00:19:46.763 nvme1n1 00:19:46.763 nvme2n1 00:19:46.763 nvme3n1 00:19:46.763 06:45:59 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.763 06:45:59 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:19:46.764 06:45:59 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.764 06:45:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:46.764 06:45:59 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.764 06:45:59 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:19:46.764 06:45:59 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:19:46.764 06:45:59 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.764 06:45:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:46.764 06:45:59 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.764 06:45:59 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:19:46.764 06:45:59 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.764 06:45:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:46.764 06:45:59 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.764 06:45:59 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:19:46.764 06:45:59 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.764 06:45:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:46.764 
06:45:59 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.764 06:45:59 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:19:46.764 06:45:59 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:19:46.764 06:45:59 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.764 06:45:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:46.764 06:45:59 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:19:46.764 06:45:59 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.764 06:45:59 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:19:46.764 06:45:59 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:19:46.764 06:45:59 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "118f66dd-e405-4442-bec5-7e4edcfa5116"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "118f66dd-e405-4442-bec5-7e4edcfa5116",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "7fabc9a5-243d-4420-946c-a844139d6d49"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7fabc9a5-243d-4420-946c-a844139d6d49",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "c886a6fe-fa44-406d-8c55-e99506f7ffd1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c886a6fe-fa44-406d-8c55-e99506f7ffd1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' 
"driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "05ce557a-6e11-4d0f-b1bd-8a797820c631"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "05ce557a-6e11-4d0f-b1bd-8a797820c631",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "828593c0-2143-4819-a10c-44ecb2c12a75"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "828593c0-2143-4819-a10c-44ecb2c12a75",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "455f57c4-00db-42eb-ab82-0b076b042864"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "455f57c4-00db-42eb-ab82-0b076b042864",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:19:46.764 06:45:59 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:19:46.764 06:45:59 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:19:46.764 06:45:59 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:19:46.764 06:45:59 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 72215 00:19:46.764 06:45:59 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 72215 ']' 00:19:46.764 06:45:59 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 72215 00:19:46.764 06:45:59 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:19:46.764 06:45:59 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:46.764 06:45:59 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 72215 00:19:47.022 06:45:59 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:47.022 06:45:59 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:47.022 killing process with pid 72215 00:19:47.022 06:45:59 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72215' 00:19:47.022 06:45:59 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 72215 00:19:47.022 06:45:59 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 72215 00:19:48.396 06:46:01 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:48.396 06:46:01 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:19:48.396 06:46:01 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:48.396 06:46:01 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:48.396 06:46:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:48.396 ************************************ 00:19:48.396 START TEST bdev_hello_world 00:19:48.396 ************************************ 00:19:48.396 06:46:01 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:19:48.396 [2024-12-06 06:46:01.076509] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:19:48.396 [2024-12-06 06:46:01.076624] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72489 ] 00:19:48.654 [2024-12-06 06:46:01.235993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.654 [2024-12-06 06:46:01.333515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.220 [2024-12-06 06:46:01.665678] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:19:49.220 [2024-12-06 06:46:01.665721] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:19:49.220 [2024-12-06 06:46:01.665736] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:19:49.220 [2024-12-06 06:46:01.667572] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:19:49.220 [2024-12-06 06:46:01.667931] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:19:49.220 [2024-12-06 06:46:01.667953] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:19:49.220 [2024-12-06 06:46:01.668147] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
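The hello_bdev example just traced opens the bdev named by -b, writes a string through it, and reads it back, which is what the 'Read string from bdev : Hello World!' notice confirms. A minimal sketch of the same invocation run by hand, assuming the bdev.json generated by this run (paths repo-relative here; the log uses the absolute /home/vagrant/spdk_repo prefix):

    # Round-trip a write + read through the first xnvme bdev
    ./build/examples/hello_bdev --json test/bdev/bdev.json -b nvme0n1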
00:19:49.220 00:19:49.220 [2024-12-06 06:46:01.668167] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:19:49.786 00:19:49.786 real 0m1.361s 00:19:49.786 user 0m1.077s 00:19:49.786 sys 0m0.171s 00:19:49.786 06:46:02 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:49.786 06:46:02 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:49.786 ************************************ 00:19:49.786 END TEST bdev_hello_world 00:19:49.786 ************************************ 00:19:49.786 06:46:02 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:19:49.786 06:46:02 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:49.786 06:46:02 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:49.786 06:46:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:49.786 ************************************ 00:19:49.786 START TEST bdev_bounds 00:19:49.786 ************************************ 00:19:49.786 06:46:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:19:49.786 Process bdevio pid: 72527 00:19:49.786 06:46:02 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72527 00:19:49.786 06:46:02 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:19:49.786 06:46:02 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72527' 00:19:49.786 06:46:02 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72527 00:19:49.786 06:46:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 72527 ']' 00:19:49.786 06:46:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.786 06:46:02 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:49.786 06:46:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:49.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:49.786 06:46:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.786 06:46:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:49.786 06:46:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:49.786 [2024-12-06 06:46:02.471594] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
00:19:49.786 [2024-12-06 06:46:02.472004] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72527 ] 00:19:50.044 [2024-12-06 06:46:02.629848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:50.044 [2024-12-06 06:46:02.732509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:50.045 [2024-12-06 06:46:02.732675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:50.045 [2024-12-06 06:46:02.732803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.609 06:46:03 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:50.609 06:46:03 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:19:50.609 06:46:03 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:19:50.867 I/O targets: 00:19:50.867 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:19:50.867 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:19:50.867 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:19:50.867 nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:19:50.867 nvme2n1: 262144 blocks of 4096 bytes (1024 MiB) 00:19:50.867 nvme3n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:19:50.867 00:19:50.867 00:19:50.867 CUnit - A unit testing framework for C - Version 2.1-3 00:19:50.867 http://cunit.sourceforge.net/ 00:19:50.867 00:19:50.867 00:19:50.867 Suite: bdevio tests on: nvme3n1 00:19:50.867 Test: blockdev write read block ...passed 00:19:50.867 Test: blockdev write zeroes read block ...passed 00:19:50.867 Test: blockdev write zeroes read no split ...passed 00:19:50.867 Test: blockdev write zeroes read split ...passed 00:19:50.867 Test: blockdev write zeroes read split partial ...passed 00:19:50.867 Test: blockdev reset ...passed 00:19:50.867 Test: blockdev write read 8 blocks ...passed 00:19:50.867 Test: blockdev write read size > 128k ...passed 00:19:50.867 Test: blockdev write read invalid size ...passed 00:19:50.867 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:50.867 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:50.867 Test: blockdev write read max offset ...passed 00:19:50.867 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:50.867 Test: blockdev writev readv 8 blocks ...passed 00:19:50.867 Test: blockdev writev readv 30 x 1block ...passed 00:19:50.867 Test: blockdev writev readv block ...passed 00:19:50.867 Test: blockdev writev readv size > 128k ...passed 00:19:50.867 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:50.867 Test: blockdev comparev and writev ...passed 00:19:50.867 Test: blockdev nvme passthru rw ...passed 00:19:50.867 Test: blockdev nvme passthru vendor specific ...passed 00:19:50.867 Test: blockdev nvme admin passthru ...passed 00:19:50.867 Test: blockdev copy ...passed 00:19:50.867 Suite: bdevio tests on: nvme2n1 00:19:50.867 Test: blockdev write read block ...passed 00:19:50.867 Test: blockdev write zeroes read block ...passed 00:19:50.867 Test: blockdev write zeroes read no split ...passed 00:19:50.867 Test: blockdev write zeroes read split ...passed 00:19:50.867 Test: blockdev write zeroes read split partial ...passed 00:19:50.867 Test: blockdev reset ...passed 
00:19:50.867 Test: blockdev write read 8 blocks ...passed 00:19:50.867 Test: blockdev write read size > 128k ...passed 00:19:50.868 Test: blockdev write read invalid size ...passed 00:19:50.868 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:50.868 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:50.868 Test: blockdev write read max offset ...passed 00:19:50.868 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:50.868 Test: blockdev writev readv 8 blocks ...passed 00:19:50.868 Test: blockdev writev readv 30 x 1block ...passed 00:19:50.868 Test: blockdev writev readv block ...passed 00:19:50.868 Test: blockdev writev readv size > 128k ...passed 00:19:50.868 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:50.868 Test: blockdev comparev and writev ...passed 00:19:50.868 Test: blockdev nvme passthru rw ...passed 00:19:50.868 Test: blockdev nvme passthru vendor specific ...passed 00:19:50.868 Test: blockdev nvme admin passthru ...passed 00:19:50.868 Test: blockdev copy ...passed 00:19:50.868 Suite: bdevio tests on: nvme1n1 00:19:50.868 Test: blockdev write read block ...passed 00:19:50.868 Test: blockdev write zeroes read block ...passed 00:19:50.868 Test: blockdev write zeroes read no split ...passed 00:19:50.868 Test: blockdev write zeroes read split ...passed 00:19:50.868 Test: blockdev write zeroes read split partial ...passed 00:19:50.868 Test: blockdev reset ...passed 00:19:50.868 Test: blockdev write read 8 blocks ...passed 00:19:50.868 Test: blockdev write read size > 128k ...passed 00:19:50.868 Test: blockdev write read invalid size ...passed 00:19:50.868 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:50.868 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:50.868 Test: blockdev write read max offset ...passed 00:19:50.868 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:50.868 Test: blockdev writev readv 8 blocks ...passed 00:19:50.868 Test: blockdev writev readv 30 x 1block ...passed 00:19:50.868 Test: blockdev writev readv block ...passed 00:19:50.868 Test: blockdev writev readv size > 128k ...passed 00:19:50.868 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:50.868 Test: blockdev comparev and writev ...passed 00:19:50.868 Test: blockdev nvme passthru rw ...passed 00:19:50.868 Test: blockdev nvme passthru vendor specific ...passed 00:19:50.868 Test: blockdev nvme admin passthru ...passed 00:19:50.868 Test: blockdev copy ...passed 00:19:50.868 Suite: bdevio tests on: nvme0n3 00:19:50.868 Test: blockdev write read block ...passed 00:19:50.868 Test: blockdev write zeroes read block ...passed 00:19:50.868 Test: blockdev write zeroes read no split ...passed 00:19:50.868 Test: blockdev write zeroes read split ...passed 00:19:51.126 Test: blockdev write zeroes read split partial ...passed 00:19:51.126 Test: blockdev reset ...passed 00:19:51.126 Test: blockdev write read 8 blocks ...passed 00:19:51.126 Test: blockdev write read size > 128k ...passed 00:19:51.126 Test: blockdev write read invalid size ...passed 00:19:51.126 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:51.126 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:51.126 Test: blockdev write read max offset ...passed 00:19:51.126 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:51.126 Test: blockdev writev readv 8 blocks 
...passed 00:19:51.126 Test: blockdev writev readv 30 x 1block ...passed 00:19:51.126 Test: blockdev writev readv block ...passed 00:19:51.126 Test: blockdev writev readv size > 128k ...passed 00:19:51.126 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:51.126 Test: blockdev comparev and writev ...passed 00:19:51.126 Test: blockdev nvme passthru rw ...passed 00:19:51.126 Test: blockdev nvme passthru vendor specific ...passed 00:19:51.126 Test: blockdev nvme admin passthru ...passed 00:19:51.126 Test: blockdev copy ...passed 00:19:51.126 Suite: bdevio tests on: nvme0n2 00:19:51.126 Test: blockdev write read block ...passed 00:19:51.126 Test: blockdev write zeroes read block ...passed 00:19:51.126 Test: blockdev write zeroes read no split ...passed 00:19:51.126 Test: blockdev write zeroes read split ...passed 00:19:51.126 Test: blockdev write zeroes read split partial ...passed 00:19:51.126 Test: blockdev reset ...passed 00:19:51.126 Test: blockdev write read 8 blocks ...passed 00:19:51.126 Test: blockdev write read size > 128k ...passed 00:19:51.126 Test: blockdev write read invalid size ...passed 00:19:51.126 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:51.126 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:51.126 Test: blockdev write read max offset ...passed 00:19:51.126 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:51.126 Test: blockdev writev readv 8 blocks ...passed 00:19:51.126 Test: blockdev writev readv 30 x 1block ...passed 00:19:51.126 Test: blockdev writev readv block ...passed 00:19:51.126 Test: blockdev writev readv size > 128k ...passed 00:19:51.126 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:51.126 Test: blockdev comparev and writev ...passed 00:19:51.126 Test: blockdev nvme passthru rw ...passed 00:19:51.126 Test: blockdev nvme passthru vendor specific ...passed 00:19:51.126 Test: blockdev nvme admin passthru ...passed 00:19:51.126 Test: blockdev copy ...passed 00:19:51.126 Suite: bdevio tests on: nvme0n1 00:19:51.126 Test: blockdev write read block ...passed 00:19:51.126 Test: blockdev write zeroes read block ...passed 00:19:51.126 Test: blockdev write zeroes read no split ...passed 00:19:51.126 Test: blockdev write zeroes read split ...passed 00:19:51.126 Test: blockdev write zeroes read split partial ...passed 00:19:51.126 Test: blockdev reset ...passed 00:19:51.126 Test: blockdev write read 8 blocks ...passed 00:19:51.126 Test: blockdev write read size > 128k ...passed 00:19:51.126 Test: blockdev write read invalid size ...passed 00:19:51.126 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:51.126 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:51.126 Test: blockdev write read max offset ...passed 00:19:51.126 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:51.126 Test: blockdev writev readv 8 blocks ...passed 00:19:51.126 Test: blockdev writev readv 30 x 1block ...passed 00:19:51.126 Test: blockdev writev readv block ...passed 00:19:51.126 Test: blockdev writev readv size > 128k ...passed 00:19:51.126 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:51.126 Test: blockdev comparev and writev ...passed 00:19:51.126 Test: blockdev nvme passthru rw ...passed 00:19:51.126 Test: blockdev nvme passthru vendor specific ...passed 00:19:51.126 Test: blockdev nvme admin passthru ...passed 00:19:51.126 Test: blockdev copy ...passed 
00:19:51.126 00:19:51.126 Run Summary: Type Total Ran Passed Failed Inactive 00:19:51.126 suites 6 6 n/a 0 0 00:19:51.126 tests 138 138 138 0 0 00:19:51.126 asserts 780 780 780 0 n/a 00:19:51.126 00:19:51.126 Elapsed time = 0.819 seconds 00:19:51.126 0 00:19:51.126 06:46:03 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72527 00:19:51.126 06:46:03 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 72527 ']' 00:19:51.126 06:46:03 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 72527 00:19:51.126 06:46:03 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:19:51.126 06:46:03 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:51.126 06:46:03 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72527 00:19:51.126 killing process with pid 72527 00:19:51.126 06:46:03 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:51.126 06:46:03 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:51.126 06:46:03 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72527' 00:19:51.126 06:46:03 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 72527 00:19:51.126 06:46:03 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 72527 00:19:51.692 ************************************ 00:19:51.692 END TEST bdev_bounds 00:19:51.692 ************************************ 00:19:51.692 06:46:04 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:19:51.692 00:19:51.692 real 0m1.944s 00:19:51.692 user 0m4.934s 00:19:51.692 sys 0m0.266s 00:19:51.692 06:46:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:51.692 06:46:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:51.692 06:46:04 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:19:51.692 06:46:04 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:51.692 06:46:04 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:51.692 06:46:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:51.692 ************************************ 00:19:51.692 START TEST bdev_nbd 00:19:51.692 ************************************ 00:19:51.692 06:46:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:19:51.692 06:46:04 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:19:51.692 06:46:04 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:19:51.692 06:46:04 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:51.692 06:46:04 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:51.692 06:46:04 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:19:51.692 06:46:04 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:19:51.692 06:46:04 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
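Before the nbd setup continues below, the killprocess call traced above is worth unpacking: the xtrace shows a pid guard, a kill -0 liveness check, a uname/ps process-name lookup, and then kill plus wait. A condensed bash paraphrase of that traced path (a sketch, not a copy of autotest_common.sh):

  killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1               # the '[ -z ... ]' guard in the trace
    kill -0 "$pid" 2>/dev/null || return 1  # process must still be alive
    local name=
    [[ $(uname) == Linux ]] && name=$(ps --no-headers -o comm= "$pid")
    # the trace compares $name ("reactor_0" here) against "sudo" before
    # choosing how to kill; only the plain, non-sudo branch is exercised above
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"    # works here because the process is a child of the test shell
  }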
00:19:51.692 06:46:04 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:19:51.692 06:46:04 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:19:51.692 06:46:04 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:19:51.692 06:46:04 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:19:51.692 06:46:04 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:51.692 06:46:04 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:19:51.692 06:46:04 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:19:51.692 06:46:04 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:19:51.692 06:46:04 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72577 00:19:51.692 06:46:04 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:19:51.692 06:46:04 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72577 /var/tmp/spdk-nbd.sock 00:19:51.692 06:46:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 72577 ']' 00:19:51.692 06:46:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:51.692 06:46:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:51.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:51.692 06:46:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:51.692 06:46:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:51.693 06:46:04 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:51.693 06:46:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:51.950 [2024-12-06 06:46:04.464696] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
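The test app has just been launched as bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json bdev.json, and waitforlisten blocks until that RPC socket answers. A minimal stand-in, assuming a simple polling loop ($rootdir is an assumed repo-root variable; the real waitforlisten is more involved):

  rpc_sock=/var/tmp/spdk-nbd.sock
  "$rootdir"/test/app/bdev_svc/bdev_svc -r "$rpc_sock" -i 0 \
      --json "$rootdir"/test/bdev/bdev.json &
  nbd_pid=$!
  for ((i = 0; i < 100; i++)); do
    # any cheap RPC confirms the socket is up; rpc_get_methods is built in
    "$rootdir"/scripts/rpc.py -s "$rpc_sock" rpc_get_methods &>/dev/null && break
    sleep 0.1
  done

The start/stop verification that follows attaches each bdev with nbd_start_disk and then calls waitfornbd, which the trace shows polling /proc/partitions and doing one direct 4 KiB read; paraphrased:

  waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
      grep -q -w "$nbd_name" /proc/partitions && break
      sleep 0.1                    # assumption: pacing between retries
    done
    # confirm the device is actually readable: one direct 4 KiB read must land
    # (the trace writes to spdk/test/bdev/nbdtest; /tmp is used here for brevity)
    dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    (( $(stat -c %s /tmp/nbdtest) != 0 )) || return 1
    rm -f /tmp/nbdtest
  }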
00:19:51.950 [2024-12-06 06:46:04.464811] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:51.950 [2024-12-06 06:46:04.623725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.208 [2024-12-06 06:46:04.787310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.775 06:46:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:52.775 06:46:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:19:52.775 06:46:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:19:52.775 06:46:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:52.775 06:46:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:19:52.775 06:46:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:19:52.775 06:46:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:19:52.775 06:46:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:52.775 06:46:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:19:52.775 06:46:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:19:52.775 06:46:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:19:52.775 06:46:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:19:52.775 06:46:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:19:52.775 06:46:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:52.775 06:46:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:19:52.775 06:46:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:19:52.775 06:46:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:19:52.775 06:46:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:19:52.775 06:46:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:52.776 06:46:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:52.776 06:46:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:52.776 06:46:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:52.776 06:46:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:52.776 06:46:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:52.776 06:46:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:52.776 06:46:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:52.776 06:46:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:52.776 
1+0 records in 00:19:52.776 1+0 records out 00:19:52.776 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000461481 s, 8.9 MB/s 00:19:52.776 06:46:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:52.776 06:46:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:52.776 06:46:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:52.776 06:46:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:52.776 06:46:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:52.776 06:46:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:52.776 06:46:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:52.776 06:46:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:19:53.051 06:46:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:19:53.052 06:46:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:19:53.052 06:46:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:19:53.052 06:46:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:53.052 06:46:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:53.052 06:46:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:53.052 06:46:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:53.052 06:46:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:53.052 06:46:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:53.052 06:46:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:53.052 06:46:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:53.052 06:46:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:53.052 1+0 records in 00:19:53.052 1+0 records out 00:19:53.052 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000295884 s, 13.8 MB/s 00:19:53.052 06:46:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:53.052 06:46:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:53.052 06:46:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:53.052 06:46:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:53.052 06:46:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:53.052 06:46:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:53.052 06:46:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:53.052 06:46:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:19:53.310 06:46:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:19:53.310 06:46:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:19:53.310 06:46:05 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:19:53.310 06:46:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:19:53.310 06:46:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:53.310 06:46:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:53.310 06:46:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:53.310 06:46:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:19:53.310 06:46:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:53.310 06:46:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:53.310 06:46:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:53.310 06:46:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:53.310 1+0 records in 00:19:53.310 1+0 records out 00:19:53.310 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000368715 s, 11.1 MB/s 00:19:53.310 06:46:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:53.310 06:46:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:53.310 06:46:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:53.310 06:46:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:53.310 06:46:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:53.310 06:46:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:53.310 06:46:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:53.310 06:46:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:19:53.568 06:46:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:19:53.568 06:46:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:19:53.568 06:46:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:19:53.568 06:46:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:19:53.568 06:46:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:53.568 06:46:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:53.568 06:46:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:53.568 06:46:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:19:53.568 06:46:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:53.568 06:46:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:53.568 06:46:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:53.568 06:46:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:53.568 1+0 records in 00:19:53.568 1+0 records out 00:19:53.568 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000426138 s, 9.6 MB/s 00:19:53.568 06:46:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:53.568 06:46:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:53.568 06:46:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:53.568 06:46:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:53.568 06:46:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:53.568 06:46:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:53.568 06:46:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:53.568 06:46:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:19:53.826 06:46:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:19:53.826 06:46:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:19:53.826 06:46:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:19:53.826 06:46:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:19:53.826 06:46:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:53.826 06:46:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:53.826 06:46:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:53.826 06:46:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:19:53.826 06:46:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:53.826 06:46:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:53.826 06:46:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:53.826 06:46:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:53.826 1+0 records in 00:19:53.826 1+0 records out 00:19:53.826 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373017 s, 11.0 MB/s 00:19:53.826 06:46:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:53.826 06:46:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:53.826 06:46:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:53.826 06:46:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:53.826 06:46:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:53.826 06:46:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:53.826 06:46:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:53.826 06:46:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:19:53.826 06:46:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:19:54.083 06:46:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:19:54.083 06:46:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:19:54.083 06:46:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:19:54.083 06:46:06 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:54.083 06:46:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:54.083 06:46:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:54.083 06:46:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:19:54.083 06:46:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:54.083 06:46:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:54.083 06:46:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:54.083 06:46:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:54.083 1+0 records in 00:19:54.083 1+0 records out 00:19:54.083 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000675247 s, 6.1 MB/s 00:19:54.083 06:46:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:54.083 06:46:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:54.083 06:46:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:54.083 06:46:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:54.083 06:46:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:54.083 06:46:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:54.083 06:46:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:54.083 06:46:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:54.083 06:46:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:19:54.083 { 00:19:54.083 "nbd_device": "/dev/nbd0", 00:19:54.083 "bdev_name": "nvme0n1" 00:19:54.083 }, 00:19:54.083 { 00:19:54.083 "nbd_device": "/dev/nbd1", 00:19:54.083 "bdev_name": "nvme0n2" 00:19:54.083 }, 00:19:54.083 { 00:19:54.083 "nbd_device": "/dev/nbd2", 00:19:54.083 "bdev_name": "nvme0n3" 00:19:54.083 }, 00:19:54.083 { 00:19:54.083 "nbd_device": "/dev/nbd3", 00:19:54.083 "bdev_name": "nvme1n1" 00:19:54.083 }, 00:19:54.083 { 00:19:54.083 "nbd_device": "/dev/nbd4", 00:19:54.083 "bdev_name": "nvme2n1" 00:19:54.083 }, 00:19:54.083 { 00:19:54.083 "nbd_device": "/dev/nbd5", 00:19:54.083 "bdev_name": "nvme3n1" 00:19:54.083 } 00:19:54.083 ]' 00:19:54.083 06:46:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:19:54.083 06:46:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:19:54.083 { 00:19:54.083 "nbd_device": "/dev/nbd0", 00:19:54.083 "bdev_name": "nvme0n1" 00:19:54.083 }, 00:19:54.083 { 00:19:54.083 "nbd_device": "/dev/nbd1", 00:19:54.083 "bdev_name": "nvme0n2" 00:19:54.083 }, 00:19:54.083 { 00:19:54.083 "nbd_device": "/dev/nbd2", 00:19:54.083 "bdev_name": "nvme0n3" 00:19:54.083 }, 00:19:54.083 { 00:19:54.083 "nbd_device": "/dev/nbd3", 00:19:54.083 "bdev_name": "nvme1n1" 00:19:54.083 }, 00:19:54.083 { 00:19:54.084 "nbd_device": "/dev/nbd4", 00:19:54.084 "bdev_name": "nvme2n1" 00:19:54.084 }, 00:19:54.084 { 00:19:54.084 "nbd_device": "/dev/nbd5", 00:19:54.084 "bdev_name": "nvme3n1" 00:19:54.084 } 00:19:54.084 ]' 00:19:54.084 06:46:06 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:19:54.084 06:46:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:19:54.084 06:46:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:54.084 06:46:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:19:54.084 06:46:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:54.084 06:46:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:54.084 06:46:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:54.084 06:46:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:54.340 06:46:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:54.340 06:46:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:54.340 06:46:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:54.340 06:46:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:54.340 06:46:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:54.340 06:46:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:54.340 06:46:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:54.340 06:46:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:54.340 06:46:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:54.340 06:46:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:19:54.597 06:46:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:54.597 06:46:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:54.597 06:46:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:54.597 06:46:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:54.597 06:46:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:54.597 06:46:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:54.597 06:46:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:54.597 06:46:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:54.597 06:46:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:54.597 06:46:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:19:54.853 06:46:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:19:54.853 06:46:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:19:54.853 06:46:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:19:54.853 06:46:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:54.853 06:46:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:54.853 06:46:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:19:54.853 06:46:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:54.853 06:46:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:54.853 06:46:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:54.853 06:46:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:19:55.111 06:46:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:19:55.111 06:46:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:19:55.111 06:46:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:19:55.111 06:46:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:55.111 06:46:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:55.111 06:46:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:19:55.111 06:46:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:55.111 06:46:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:55.111 06:46:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:55.111 06:46:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:19:55.111 06:46:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:19:55.111 06:46:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:19:55.111 06:46:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:19:55.111 06:46:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:55.111 06:46:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:55.111 06:46:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:19:55.111 06:46:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:55.111 06:46:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:55.111 06:46:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:55.111 06:46:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:19:55.369 06:46:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:19:55.369 06:46:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:19:55.369 06:46:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:19:55.369 06:46:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:55.369 06:46:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:55.369 06:46:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:19:55.369 06:46:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:55.369 06:46:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:55.369 06:46:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:55.369 06:46:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:55.369 06:46:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:55.627 06:46:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:55.627 06:46:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:55.627 06:46:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:55.627 06:46:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:55.627 06:46:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:55.627 06:46:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:55.627 06:46:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:55.627 06:46:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:55.627 06:46:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:55.627 06:46:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:19:55.627 06:46:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:19:55.627 06:46:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:19:55.627 06:46:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:19:55.627 06:46:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:55.627 06:46:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:19:55.627 06:46:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:55.627 06:46:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:55.627 06:46:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:55.627 06:46:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:19:55.627 06:46:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:55.627 06:46:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:19:55.627 06:46:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:55.627 06:46:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:55.627 06:46:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:55.627 06:46:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:19:55.627 06:46:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:55.627 06:46:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:55.627 06:46:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:19:55.883 /dev/nbd0 00:19:55.883 06:46:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:55.884 06:46:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:55.884 06:46:08 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:55.884 06:46:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:55.884 06:46:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:55.884 06:46:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:55.884 06:46:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:55.884 06:46:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:55.884 06:46:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:55.884 06:46:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:55.884 06:46:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:55.884 1+0 records in 00:19:55.884 1+0 records out 00:19:55.884 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000313979 s, 13.0 MB/s 00:19:55.884 06:46:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:55.884 06:46:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:55.884 06:46:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:55.884 06:46:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:55.884 06:46:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:55.884 06:46:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:55.884 06:46:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:55.884 06:46:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:19:56.140 /dev/nbd1 00:19:56.140 06:46:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:56.140 06:46:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:56.140 06:46:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:56.140 06:46:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:56.140 06:46:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:56.140 06:46:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:56.140 06:46:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:56.140 06:46:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:56.140 06:46:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:56.140 06:46:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:56.140 06:46:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:56.140 1+0 records in 00:19:56.140 1+0 records out 00:19:56.140 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000406041 s, 10.1 MB/s 00:19:56.140 06:46:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:56.140 06:46:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:56.140 06:46:08 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:56.140 06:46:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:56.140 06:46:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:56.140 06:46:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:56.140 06:46:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:56.140 06:46:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:19:56.396 /dev/nbd10 00:19:56.396 06:46:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:19:56.396 06:46:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:19:56.396 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:19:56.396 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:56.396 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:56.396 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:56.396 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:19:56.396 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:56.396 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:56.396 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:56.396 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:56.396 1+0 records in 00:19:56.396 1+0 records out 00:19:56.396 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363734 s, 11.3 MB/s 00:19:56.396 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:56.396 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:56.396 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:56.396 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:56.396 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:56.396 06:46:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:56.396 06:46:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:56.397 06:46:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:19:56.653 /dev/nbd11 00:19:56.653 06:46:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:19:56.653 06:46:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:19:56.654 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:19:56.654 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:56.654 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:56.654 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:56.654 06:46:09 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:19:56.654 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:56.654 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:56.654 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:56.654 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:56.654 1+0 records in 00:19:56.654 1+0 records out 00:19:56.654 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000447421 s, 9.2 MB/s 00:19:56.654 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:56.654 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:56.654 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:56.654 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:56.654 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:56.654 06:46:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:56.654 06:46:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:56.654 06:46:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:19:56.911 /dev/nbd12 00:19:56.911 06:46:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:19:56.911 06:46:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:19:56.911 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:19:56.911 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:56.911 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:56.911 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:56.911 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:19:56.911 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:56.911 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:56.911 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:56.911 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:56.911 1+0 records in 00:19:56.911 1+0 records out 00:19:56.911 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384931 s, 10.6 MB/s 00:19:56.911 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:56.911 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:56.911 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:56.911 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:56.911 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:56.911 06:46:09 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:56.911 06:46:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:56.911 06:46:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:19:57.169 /dev/nbd13 00:19:57.169 06:46:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:19:57.169 06:46:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:19:57.169 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:19:57.169 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:57.169 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:57.169 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:57.169 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:19:57.169 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:57.169 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:57.169 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:57.169 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:57.169 1+0 records in 00:19:57.169 1+0 records out 00:19:57.169 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00054368 s, 7.5 MB/s 00:19:57.169 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:57.169 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:57.169 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:57.169 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:57.169 06:46:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:57.169 06:46:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:57.169 06:46:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:57.169 06:46:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:57.169 06:46:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:57.169 06:46:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:57.427 06:46:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:57.427 { 00:19:57.427 "nbd_device": "/dev/nbd0", 00:19:57.427 "bdev_name": "nvme0n1" 00:19:57.427 }, 00:19:57.427 { 00:19:57.427 "nbd_device": "/dev/nbd1", 00:19:57.427 "bdev_name": "nvme0n2" 00:19:57.427 }, 00:19:57.427 { 00:19:57.427 "nbd_device": "/dev/nbd10", 00:19:57.427 "bdev_name": "nvme0n3" 00:19:57.427 }, 00:19:57.427 { 00:19:57.427 "nbd_device": "/dev/nbd11", 00:19:57.427 "bdev_name": "nvme1n1" 00:19:57.427 }, 00:19:57.427 { 00:19:57.427 "nbd_device": "/dev/nbd12", 00:19:57.427 "bdev_name": "nvme2n1" 00:19:57.427 }, 00:19:57.427 { 00:19:57.427 "nbd_device": "/dev/nbd13", 00:19:57.427 "bdev_name": "nvme3n1" 00:19:57.427 } 00:19:57.427 ]' 00:19:57.427 06:46:09 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # echo '[ 00:19:57.427 { 00:19:57.427 "nbd_device": "/dev/nbd0", 00:19:57.427 "bdev_name": "nvme0n1" 00:19:57.427 }, 00:19:57.427 { 00:19:57.427 "nbd_device": "/dev/nbd1", 00:19:57.427 "bdev_name": "nvme0n2" 00:19:57.427 }, 00:19:57.427 { 00:19:57.427 "nbd_device": "/dev/nbd10", 00:19:57.427 "bdev_name": "nvme0n3" 00:19:57.427 }, 00:19:57.427 { 00:19:57.427 "nbd_device": "/dev/nbd11", 00:19:57.427 "bdev_name": "nvme1n1" 00:19:57.427 }, 00:19:57.427 { 00:19:57.427 "nbd_device": "/dev/nbd12", 00:19:57.427 "bdev_name": "nvme2n1" 00:19:57.427 }, 00:19:57.427 { 00:19:57.427 "nbd_device": "/dev/nbd13", 00:19:57.427 "bdev_name": "nvme3n1" 00:19:57.427 } 00:19:57.427 ]' 00:19:57.427 06:46:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:57.427 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:19:57.427 /dev/nbd1 00:19:57.427 /dev/nbd10 00:19:57.427 /dev/nbd11 00:19:57.427 /dev/nbd12 00:19:57.427 /dev/nbd13' 00:19:57.427 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:19:57.427 /dev/nbd1 00:19:57.427 /dev/nbd10 00:19:57.427 /dev/nbd11 00:19:57.427 /dev/nbd12 00:19:57.427 /dev/nbd13' 00:19:57.427 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:57.427 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:19:57.427 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:19:57.427 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:19:57.427 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:19:57.427 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:19:57.427 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:57.427 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:57.427 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:57.427 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:57.427 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:57.427 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:19:57.427 256+0 records in 00:19:57.427 256+0 records out 00:19:57.427 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107419 s, 97.6 MB/s 00:19:57.427 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:57.427 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:57.427 256+0 records in 00:19:57.427 256+0 records out 00:19:57.427 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0614683 s, 17.1 MB/s 00:19:57.427 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:57.427 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:19:57.427 256+0 records in 00:19:57.427 256+0 records out 00:19:57.427 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0589194 
s, 17.8 MB/s 00:19:57.427 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:57.427 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:19:57.685 256+0 records in 00:19:57.685 256+0 records out 00:19:57.685 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0731658 s, 14.3 MB/s 00:19:57.685 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:57.685 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:19:57.685 256+0 records in 00:19:57.685 256+0 records out 00:19:57.685 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.071557 s, 14.7 MB/s 00:19:57.686 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:57.686 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:19:57.686 256+0 records in 00:19:57.686 256+0 records out 00:19:57.686 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.05864 s, 17.9 MB/s 00:19:57.686 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:57.686 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:19:57.944 256+0 records in 00:19:57.944 256+0 records out 00:19:57.944 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0690268 s, 15.2 MB/s 00:19:57.944 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:19:57.944 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:57.944 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:57.944 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:57.944 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:57.944 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:57.944 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:57.944 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:57.944 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:19:57.944 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:57.944 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:19:57.944 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:57.944 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:19:57.944 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:57.944 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 
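The cmp calls above and below implement nbd_dd_data_verify: write the same random megabyte through every nbd device, then compare each device back against the source file. Condensed from the trace (the device count was first confirmed as 6 via nbd_get_disks piped through jq -r '.[] | .nbd_device' and grep -c /dev/nbd):

  tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
  nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
  dd if=/dev/urandom of="$tmp" bs=4096 count=256      # 1 MiB of random data
  for dev in "${nbd_list[@]}"; do                     # write pass
    dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
  done
  for dev in "${nbd_list[@]}"; do                     # verify pass
    cmp -b -n 1M "$tmp" "$dev"                        # byte-for-byte, first 1 MiB
  done
  rm "$tmp"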
00:19:57.944 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:57.944 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:19:57.944 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:57.944 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:19:57.944 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:57.944 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:19:57.944 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:57.944 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:57.944 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:57.944 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:57.944 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:57.944 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:57.944 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:58.202 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:58.202 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:58.202 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:58.202 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:58.202 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:58.202 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:58.202 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:58.202 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:58.202 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:19:58.202 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:58.202 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:58.202 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:58.202 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:58.202 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:58.202 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:58.202 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:58.202 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:58.202 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:58.202 06:46:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_stop_disk /dev/nbd10 00:19:58.472 06:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:19:58.472 06:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:19:58.472 06:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:19:58.472 06:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:58.472 06:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:58.472 06:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:19:58.472 06:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:58.472 06:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:58.472 06:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:58.472 06:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:19:58.737 06:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:19:58.737 06:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:19:58.737 06:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:19:58.737 06:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:58.737 06:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:58.737 06:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:19:58.737 06:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:58.737 06:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:58.737 06:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:58.738 06:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:19:58.995 06:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:19:58.995 06:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:19:58.995 06:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:19:58.995 06:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:58.995 06:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:58.995 06:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:19:58.995 06:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:58.995 06:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:58.995 06:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:58.995 06:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:19:59.253 06:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:19:59.253 06:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:19:59.253 06:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:19:59.253 06:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:59.253 06:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:19:59.253 06:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:19:59.253 06:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:59.253 06:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:59.253 06:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:59.253 06:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:59.253 06:46:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:59.511 06:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:59.511 06:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:59.511 06:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:59.511 06:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:59.511 06:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:59.511 06:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:59.511 06:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:59.511 06:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:59.511 06:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:59.511 06:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:19:59.511 06:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:59.511 06:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:19:59.511 06:46:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:59.511 06:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:59.511 06:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:19:59.511 06:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:19:59.769 malloc_lvol_verify 00:19:59.769 06:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:19:59.769 35329cd2-074b-42ec-9618-ff3af7126e69 00:19:59.769 06:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:20:00.028 68cc39e2-571e-4309-93cd-617ca908ada9 00:20:00.028 06:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:20:00.285 /dev/nbd0 00:20:00.285 06:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:20:00.285 06:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:20:00.285 06:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:20:00.285 06:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:20:00.285 06:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:20:00.285 mke2fs 1.47.0 (5-Feb-2023) 00:20:00.285 Discarding device 
blocks: 0/4096 done 00:20:00.285 Creating filesystem with 4096 1k blocks and 1024 inodes 00:20:00.285 00:20:00.285 Allocating group tables: 0/1 done 00:20:00.285 Writing inode tables: 0/1 done 00:20:00.285 Creating journal (1024 blocks): done 00:20:00.285 Writing superblocks and filesystem accounting information: 0/1 done 00:20:00.285 00:20:00.285 06:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:00.285 06:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:00.285 06:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:00.285 06:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:00.285 06:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:00.285 06:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:00.285 06:46:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:00.543 06:46:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:00.543 06:46:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:00.543 06:46:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:00.543 06:46:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:00.543 06:46:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:00.543 06:46:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:00.543 06:46:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:00.543 06:46:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:00.543 06:46:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72577 00:20:00.543 06:46:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 72577 ']' 00:20:00.543 06:46:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 72577 00:20:00.543 06:46:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:20:00.543 06:46:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:00.543 06:46:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72577 00:20:00.543 06:46:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:00.543 06:46:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:00.543 killing process with pid 72577 00:20:00.543 06:46:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72577' 00:20:00.543 06:46:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 72577 00:20:00.543 06:46:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 72577 00:20:01.476 06:46:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:20:01.476 00:20:01.476 real 0m9.522s 00:20:01.476 user 0m13.571s 00:20:01.476 sys 0m3.100s 00:20:01.476 06:46:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:01.476 06:46:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:01.476 ************************************ 00:20:01.476 END TEST bdev_nbd 00:20:01.476 
************************************ 00:20:01.476 06:46:13 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:20:01.476 06:46:13 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:20:01.476 06:46:13 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:20:01.476 06:46:13 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:20:01.476 06:46:13 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:01.476 06:46:13 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:01.476 06:46:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:01.476 ************************************ 00:20:01.476 START TEST bdev_fio 00:20:01.476 ************************************ 00:20:01.476 06:46:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:20:01.476 06:46:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:20:01.477 06:46:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:20:01.477 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:20:01.477 06:46:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:20:01.477 06:46:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:20:01.477 06:46:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:20:01.477 06:46:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:20:01.477 06:46:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:20:01.477 06:46:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:01.477 06:46:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:20:01.477 06:46:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:20:01.477 06:46:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:20:01.477 06:46:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:20:01.477 06:46:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:20:01.477 06:46:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:20:01.477 06:46:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:20:01.477 06:46:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:01.477 06:46:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:20:01.477 06:46:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:20:01.477 06:46:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:20:01.477 06:46:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:20:01.477 06:46:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:20:01.477 06:46:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:20:01.477 06:46:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:20:01.477 06:46:14 
blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:01.477 06:46:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:20:01.477 06:46:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:20:01.477 06:46:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:01.477 06:46:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:20:01.477 06:46:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:20:01.477 06:46:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:01.477 06:46:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:20:01.477 06:46:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:20:01.477 06:46:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:01.477 06:46:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:20:01.477 06:46:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:20:01.477 06:46:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:01.477 06:46:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:20:01.477 06:46:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:20:01.477 06:46:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:01.477 06:46:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:20:01.477 06:46:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:20:01.477 06:46:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:20:01.477 06:46:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:01.477 06:46:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:20:01.477 06:46:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:01.477 06:46:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:20:01.477 ************************************ 00:20:01.477 START TEST bdev_fio_rw_verify 00:20:01.477 ************************************ 00:20:01.477 06:46:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:01.477 06:46:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:01.477 06:46:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:01.477 06:46:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:01.477 06:46:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:01.477 06:46:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:01.477 06:46:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:20:01.477 06:46:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:01.477 06:46:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:01.477 06:46:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:01.477 06:46:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:20:01.477 06:46:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:01.477 06:46:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:01.477 06:46:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:01.477 06:46:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:20:01.477 06:46:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:01.477 06:46:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:01.735 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:01.735 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:01.735 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:01.735 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:01.735 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:01.735 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:01.735 fio-3.35 00:20:01.735 Starting 6 threads 00:20:13.930 00:20:13.930 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=72983: Fri Dec 6 06:46:24 2024 00:20:13.930 read: IOPS=46.4k, BW=181MiB/s (190MB/s)(1813MiB/10001msec) 00:20:13.930 slat (usec): min=2, max=1120, avg= 4.74, stdev= 3.57 00:20:13.930 clat (usec): min=73, max=4093, avg=375.91, 
stdev=189.24 00:20:13.930 lat (usec): min=77, max=4098, avg=380.65, stdev=189.65 00:20:13.930 clat percentiles (usec): 00:20:13.930 | 50.000th=[ 351], 99.000th=[ 938], 99.900th=[ 1385], 99.990th=[ 3654], 00:20:13.930 | 99.999th=[ 4080] 00:20:13.930 write: IOPS=46.8k, BW=183MiB/s (192MB/s)(1830MiB/10001msec); 0 zone resets 00:20:13.930 slat (usec): min=10, max=2290, avg=19.63, stdev=23.50 00:20:13.930 clat (usec): min=71, max=3119, avg=448.44, stdev=194.35 00:20:13.930 lat (usec): min=86, max=3134, avg=468.07, stdev=196.97 00:20:13.930 clat percentiles (usec): 00:20:13.930 | 50.000th=[ 420], 99.000th=[ 1045], 99.900th=[ 1418], 99.990th=[ 2278], 00:20:13.930 | 99.999th=[ 3064] 00:20:13.930 bw ( KiB/s): min=166128, max=205305, per=100.00%, avg=187633.11, stdev=1728.38, samples=114 00:20:13.930 iops : min=41531, max=51325, avg=46907.47, stdev=432.06, samples=114 00:20:13.930 lat (usec) : 100=0.06%, 250=19.45%, 500=53.50%, 750=21.62%, 1000=4.36% 00:20:13.930 lat (msec) : 2=0.97%, 4=0.03%, 10=0.01% 00:20:13.930 cpu : usr=53.68%, sys=30.98%, ctx=10579, majf=0, minf=36712 00:20:13.930 IO depths : 1=12.4%, 2=24.8%, 4=50.2%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:13.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.930 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.930 issued rwts: total=464093,468444,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:13.930 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:13.930 00:20:13.930 Run status group 0 (all jobs): 00:20:13.930 READ: bw=181MiB/s (190MB/s), 181MiB/s-181MiB/s (190MB/s-190MB/s), io=1813MiB (1901MB), run=10001-10001msec 00:20:13.930 WRITE: bw=183MiB/s (192MB/s), 183MiB/s-183MiB/s (192MB/s-192MB/s), io=1830MiB (1919MB), run=10001-10001msec 00:20:13.930 ----------------------------------------------------- 00:20:13.930 Suppressions used: 00:20:13.930 count bytes template 00:20:13.930 6 48 /usr/src/fio/parse.c 00:20:13.930 4000 384000 /usr/src/fio/iolog.c 00:20:13.930 1 8 libtcmalloc_minimal.so 00:20:13.930 1 904 libcrypto.so 00:20:13.930 ----------------------------------------------------- 00:20:13.930 00:20:13.930 00:20:13.930 real 0m11.882s 00:20:13.930 user 0m33.675s 00:20:13.930 sys 0m18.876s 00:20:13.930 06:46:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:13.930 06:46:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:20:13.930 ************************************ 00:20:13.930 END TEST bdev_fio_rw_verify 00:20:13.930 ************************************ 00:20:13.930 06:46:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:20:13.930 06:46:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:13.930 06:46:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:20:13.930 06:46:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:13.930 06:46:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:20:13.930 06:46:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:20:13.930 06:46:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:20:13.930 06:46:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
fio_dir=/usr/src/fio 00:20:13.930 06:46:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:20:13.930 06:46:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:20:13.930 06:46:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:20:13.930 06:46:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:13.930 06:46:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:20:13.930 06:46:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:20:13.930 06:46:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:20:13.930 06:46:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:20:13.930 06:46:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:20:13.930 06:46:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "118f66dd-e405-4442-bec5-7e4edcfa5116"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "118f66dd-e405-4442-bec5-7e4edcfa5116",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "7fabc9a5-243d-4420-946c-a844139d6d49"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7fabc9a5-243d-4420-946c-a844139d6d49",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "c886a6fe-fa44-406d-8c55-e99506f7ffd1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c886a6fe-fa44-406d-8c55-e99506f7ffd1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "05ce557a-6e11-4d0f-b1bd-8a797820c631"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "05ce557a-6e11-4d0f-b1bd-8a797820c631",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "828593c0-2143-4819-a10c-44ecb2c12a75"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "828593c0-2143-4819-a10c-44ecb2c12a75",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "455f57c4-00db-42eb-ab82-0b076b042864"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "455f57c4-00db-42eb-ab82-0b076b042864",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:20:13.930 06:46:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:20:13.930 06:46:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:13.930 06:46:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:20:13.931 /home/vagrant/spdk_repo/spdk 00:20:13.931 06:46:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:20:13.931 06:46:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
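Behind the fio_bdev wrapper traced through this section is a plain fio invocation with SPDK's external ioengine preloaded. A minimal sketch of the equivalent direct call, assuming an SPDK tree with the fio plugin built and the generated job file and bdev JSON config in place; every flag below is copied from the traced command, and the paths are illustrative:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk   # assumed checkout location
    # LD_PRELOAD lets fio resolve ioengine=spdk_bdev; the ASan runtime is
    # preloaded ahead of the plugin only on sanitizer builds, as traced above.
    LD_PRELOAD="/usr/lib64/libasan.so.8 $SPDK_DIR/build/fio/spdk_bdev" \
        /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        "$SPDK_DIR/test/bdev/bdev.fio" --verify_state_save=0 \
        --spdk_json_conf="$SPDK_DIR/test/bdev/bdev.json" --spdk_mem=0 \
        --aux-path="$SPDK_DIR/../output"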
00:20:13.931 00:20:13.931 real 0m12.031s 00:20:13.931 user 0m33.752s 00:20:13.931 sys 0m18.941s 00:20:13.931 06:46:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:13.931 06:46:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:20:13.931 ************************************ 00:20:13.931 END TEST bdev_fio 00:20:13.931 ************************************ 00:20:13.931 06:46:26 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:13.931 06:46:26 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:13.931 06:46:26 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:20:13.931 06:46:26 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:13.931 06:46:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:13.931 ************************************ 00:20:13.931 START TEST bdev_verify 00:20:13.931 ************************************ 00:20:13.931 06:46:26 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:13.931 [2024-12-06 06:46:26.085664] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:20:13.931 [2024-12-06 06:46:26.086052] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73156 ] 00:20:13.931 [2024-12-06 06:46:26.241704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:13.931 [2024-12-06 06:46:26.345155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:13.931 [2024-12-06 06:46:26.345174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.249 Running I/O for 5 seconds... 
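The verify pass just launched is driven by the bdevperf example app rather than fio. A minimal sketch of an equivalent standalone run; the flags are copied verbatim from the traced command line, and the paths are illustrative:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk   # assumed checkout location
    # -q 128: 128 outstanding I/Os per job; -o 4096: 4 KiB I/O size;
    # -w verify: write, read back, and compare; -t 5: run for 5 seconds;
    # -m 0x3: cores 0 and 1; -C is carried over unchanged from the trace.
    "$SPDK_DIR/build/examples/bdevperf" \
        --json "$SPDK_DIR/test/bdev/bdev.json" \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3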
00:20:16.544 23264.00 IOPS, 90.88 MiB/s [2024-12-06T06:46:30.217Z] 24480.00 IOPS, 95.62 MiB/s [2024-12-06T06:46:31.159Z] 24928.00 IOPS, 97.37 MiB/s [2024-12-06T06:46:32.097Z] 24488.00 IOPS, 95.66 MiB/s 00:20:19.356 Latency(us) 00:20:19.356 [2024-12-06T06:46:32.097Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.356 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:19.356 Verification LBA range: start 0x0 length 0x80000 00:20:19.356 nvme0n1 : 5.08 1736.98 6.79 0.00 0.00 73547.21 17039.36 75013.51 00:20:19.356 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:19.356 Verification LBA range: start 0x80000 length 0x80000 00:20:19.356 nvme0n1 : 5.07 1741.21 6.80 0.00 0.00 73369.19 11695.66 74206.92 00:20:19.356 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:19.356 Verification LBA range: start 0x0 length 0x80000 00:20:19.356 nvme0n2 : 5.08 1739.85 6.80 0.00 0.00 73286.29 11191.53 77433.30 00:20:19.356 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:19.356 Verification LBA range: start 0x80000 length 0x80000 00:20:19.356 nvme0n2 : 5.06 1745.14 6.82 0.00 0.00 73010.72 17039.36 62511.26 00:20:19.356 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:19.356 Verification LBA range: start 0x0 length 0x80000 00:20:19.356 nvme0n3 : 5.09 1736.46 6.78 0.00 0.00 73264.21 15325.34 62107.96 00:20:19.356 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:19.356 Verification LBA range: start 0x80000 length 0x80000 00:20:19.356 nvme0n3 : 5.08 1740.09 6.80 0.00 0.00 73067.15 13006.38 64124.46 00:20:19.356 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:19.356 Verification LBA range: start 0x0 length 0xa0000 00:20:19.356 nvme1n1 : 5.09 1735.96 6.78 0.00 0.00 73120.60 10334.52 67350.84 00:20:19.356 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:19.356 Verification LBA range: start 0xa0000 length 0xa0000 00:20:19.356 nvme1n1 : 5.08 1738.68 6.79 0.00 0.00 72965.74 11544.42 65334.35 00:20:19.356 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:19.356 Verification LBA range: start 0x0 length 0x20000 00:20:19.356 nvme2n1 : 5.08 1739.03 6.79 0.00 0.00 72828.52 9779.99 74610.22 00:20:19.356 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:19.356 Verification LBA range: start 0x20000 length 0x20000 00:20:19.356 nvme2n1 : 5.08 1738.06 6.79 0.00 0.00 72836.38 9527.93 70173.93 00:20:19.356 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:19.356 Verification LBA range: start 0x0 length 0xbd0bd 00:20:19.356 nvme3n1 : 5.08 3229.54 12.62 0.00 0.00 39109.52 3503.66 61704.66 00:20:19.356 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:19.356 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:20:19.356 nvme3n1 : 5.08 3264.35 12.75 0.00 0.00 38654.19 3654.89 60091.47 00:20:19.356 [2024-12-06T06:46:32.097Z] =================================================================================================================== 00:20:19.356 [2024-12-06T06:46:32.097Z] Total : 23885.36 93.30 0.00 0.00 63815.29 3503.66 77433.30 00:20:19.921 00:20:19.921 real 0m6.551s 00:20:19.921 user 0m10.488s 00:20:19.921 sys 0m1.685s 00:20:19.921 06:46:32 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:19.921 
06:46:32 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:20:19.921 ************************************ 00:20:19.921 END TEST bdev_verify 00:20:19.921 ************************************ 00:20:19.922 06:46:32 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:19.922 06:46:32 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:20:19.922 06:46:32 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:19.922 06:46:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:19.922 ************************************ 00:20:19.922 START TEST bdev_verify_big_io 00:20:19.922 ************************************ 00:20:19.922 06:46:32 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:20.179 [2024-12-06 06:46:32.686573] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:20:20.179 [2024-12-06 06:46:32.687125] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73256 ] 00:20:20.179 [2024-12-06 06:46:32.844297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:20.436 [2024-12-06 06:46:32.945684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:20.436 [2024-12-06 06:46:32.945790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.694 Running I/O for 5 seconds... 
00:20:25.871 1544.00 IOPS, 96.50 MiB/s [2024-12-06T06:46:39.593Z] 2148.00 IOPS, 134.25 MiB/s [2024-12-06T06:46:39.593Z] 2730.67 IOPS, 170.67 MiB/s 00:20:26.852 Latency(us) 00:20:26.852 [2024-12-06T06:46:39.593Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.852 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:26.852 Verification LBA range: start 0x0 length 0x8000 00:20:26.852 nvme0n1 : 6.03 106.21 6.64 0.00 0.00 1161602.91 125022.52 1038896.84 00:20:26.852 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:26.852 Verification LBA range: start 0x8000 length 0x8000 00:20:26.852 nvme0n1 : 5.76 111.11 6.94 0.00 0.00 1119597.65 28432.54 1090519.04 00:20:26.852 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:26.852 Verification LBA range: start 0x0 length 0x8000 00:20:26.852 nvme0n2 : 6.03 106.19 6.64 0.00 0.00 1109254.30 20064.10 1058255.16 00:20:26.852 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:26.852 Verification LBA range: start 0x8000 length 0x8000 00:20:26.852 nvme0n2 : 5.65 102.03 6.38 0.00 0.00 1173187.52 76223.41 1516402.22 00:20:26.852 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:26.852 Verification LBA range: start 0x0 length 0x8000 00:20:26.852 nvme0n3 : 5.93 91.77 5.74 0.00 0.00 1271458.45 144380.85 2619826.81 00:20:26.852 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:26.852 Verification LBA range: start 0x8000 length 0x8000 00:20:26.852 nvme0n3 : 5.97 104.60 6.54 0.00 0.00 1076885.10 100018.02 1174405.12 00:20:26.852 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:26.852 Verification LBA range: start 0x0 length 0xa000 00:20:26.852 nvme1n1 : 6.03 112.75 7.05 0.00 0.00 1003926.41 101227.91 1355082.83 00:20:26.852 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:26.852 Verification LBA range: start 0xa000 length 0xa000 00:20:26.852 nvme1n1 : 6.06 113.50 7.09 0.00 0.00 978498.79 124215.93 2026171.47 00:20:26.852 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:26.852 Verification LBA range: start 0x0 length 0x2000 00:20:26.852 nvme2n1 : 6.05 103.19 6.45 0.00 0.00 1065319.15 9175.04 2168132.53 00:20:26.852 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:26.852 Verification LBA range: start 0x2000 length 0x2000 00:20:26.852 nvme2n1 : 6.08 152.53 9.53 0.00 0.00 715440.12 17946.78 1258291.20 00:20:26.852 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:26.852 Verification LBA range: start 0x0 length 0xbd0b 00:20:26.852 nvme3n1 : 6.05 190.30 11.89 0.00 0.00 566272.14 2306.36 1058255.16 00:20:26.852 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:26.852 Verification LBA range: start 0xbd0b length 0xbd0b 00:20:26.852 nvme3n1 : 6.08 144.74 9.05 0.00 0.00 731848.16 7158.55 1387346.71 00:20:26.852 [2024-12-06T06:46:39.593Z] =================================================================================================================== 00:20:26.852 [2024-12-06T06:46:39.593Z] Total : 1438.91 89.93 0.00 0.00 950584.19 2306.36 2619826.81 00:20:27.783 00:20:27.783 real 0m7.771s 00:20:27.783 user 0m14.405s 00:20:27.783 sys 0m0.390s 00:20:27.783 06:46:40 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:27.783 06:46:40 
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:20:27.783 ************************************ 00:20:27.783 END TEST bdev_verify_big_io 00:20:27.783 ************************************ 00:20:27.783 06:46:40 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:27.783 06:46:40 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:27.783 06:46:40 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:27.783 06:46:40 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:27.783 ************************************ 00:20:27.783 START TEST bdev_write_zeroes 00:20:27.783 ************************************ 00:20:27.783 06:46:40 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:27.783 [2024-12-06 06:46:40.494673] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:20:27.783 [2024-12-06 06:46:40.494790] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73366 ] 00:20:28.040 [2024-12-06 06:46:40.650876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.040 [2024-12-06 06:46:40.755613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.604 Running I/O for 1 seconds... 
00:20:29.535 78495.00 IOPS, 306.62 MiB/s 00:20:29.535 Latency(us) 00:20:29.535 [2024-12-06T06:46:42.276Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:29.535 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:29.535 nvme0n1 : 1.02 11284.72 44.08 0.00 0.00 11332.49 5116.85 28029.24 00:20:29.535 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:29.535 nvme0n2 : 1.02 11271.91 44.03 0.00 0.00 11337.12 5116.85 26819.35 00:20:29.535 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:29.535 nvme0n3 : 1.02 11258.70 43.98 0.00 0.00 11342.21 5142.06 25609.45 00:20:29.535 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:29.535 nvme1n1 : 1.02 11246.04 43.93 0.00 0.00 11346.72 5142.06 24399.56 00:20:29.535 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:29.535 nvme2n1 : 1.03 11233.41 43.88 0.00 0.00 11351.49 5167.26 23592.96 00:20:29.535 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:29.535 nvme3n1 : 1.03 21167.10 82.68 0.00 0.00 6017.05 2192.94 17543.48 00:20:29.535 [2024-12-06T06:46:42.276Z] =================================================================================================================== 00:20:29.535 [2024-12-06T06:46:42.276Z] Total : 77461.88 302.59 0.00 0.00 9880.43 2192.94 28029.24 00:20:30.467 00:20:30.467 real 0m2.464s 00:20:30.467 user 0m1.742s 00:20:30.467 sys 0m0.551s 00:20:30.467 06:46:42 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:30.467 06:46:42 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:20:30.467 ************************************ 00:20:30.467 END TEST bdev_write_zeroes 00:20:30.467 ************************************ 00:20:30.467 06:46:42 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:30.467 06:46:42 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:30.467 06:46:42 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:30.467 06:46:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:30.467 ************************************ 00:20:30.467 START TEST bdev_json_nonenclosed 00:20:30.467 ************************************ 00:20:30.467 06:46:42 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:30.467 [2024-12-06 06:46:43.000303] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
00:20:30.467 [2024-12-06 06:46:43.000424] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73408 ] 00:20:30.467 [2024-12-06 06:46:43.157731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.725 [2024-12-06 06:46:43.256033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:30.725 [2024-12-06 06:46:43.256117] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:20:30.725 [2024-12-06 06:46:43.256134] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:30.725 [2024-12-06 06:46:43.256143] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:30.725 00:20:30.725 real 0m0.492s 00:20:30.725 user 0m0.312s 00:20:30.725 sys 0m0.076s 00:20:30.725 06:46:43 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:30.725 06:46:43 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:20:30.725 ************************************ 00:20:30.725 END TEST bdev_json_nonenclosed 00:20:30.725 ************************************ 00:20:30.982 06:46:43 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:30.982 06:46:43 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:30.982 06:46:43 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:30.982 06:46:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:30.982 ************************************ 00:20:30.982 START TEST bdev_json_nonarray 00:20:30.982 ************************************ 00:20:30.982 06:46:43 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:30.982 [2024-12-06 06:46:43.533713] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:20:30.982 [2024-12-06 06:46:43.533829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73434 ] 00:20:30.982 [2024-12-06 06:46:43.697700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.240 [2024-12-06 06:46:43.794451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.240 [2024-12-06 06:46:43.794541] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
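Both negative tests hand bdevperf a deliberately malformed config and expect the JSON loader to reject it with exactly the errors logged here. For contrast, a minimal sketch of the three shapes involved, written as the files a reproduction would feed in; the accepted shape is illustrative and matched only against the error messages above:

    # Accepted: a top-level object whose "subsystems" key is an array.
    cat > good.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [] } ] }
    EOF
    # Rejected: not enclosed in {} ("Invalid JSON configuration: not enclosed in {}.")
    echo '"subsystems": []' > nonenclosed.json
    # Rejected: subsystems is an object, not an array ("'subsystems' should be an array.")
    echo '{ "subsystems": {} }' > nonarray.json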
00:20:31.240 [2024-12-06 06:46:43.794558] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:31.240 [2024-12-06 06:46:43.794567] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:31.499 00:20:31.499 real 0m0.543s 00:20:31.499 user 0m0.342s 00:20:31.499 sys 0m0.096s 00:20:31.499 06:46:44 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:31.499 06:46:44 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:20:31.499 ************************************ 00:20:31.499 END TEST bdev_json_nonarray 00:20:31.499 ************************************ 00:20:31.499 06:46:44 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:20:31.499 06:46:44 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:20:31.499 06:46:44 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:20:31.499 06:46:44 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:20:31.499 06:46:44 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:20:31.499 06:46:44 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:20:31.499 06:46:44 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:31.499 06:46:44 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:20:31.499 06:46:44 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:20:31.499 06:46:44 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:20:31.499 06:46:44 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:20:31.499 06:46:44 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:31.757 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:53.291 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:53.291 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:22:03.323 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:22:03.323 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:22:03.323 00:22:03.323 real 2m17.455s 00:22:03.323 user 1m28.505s 00:22:03.323 sys 2m53.705s 00:22:03.323 06:48:14 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:03.323 ************************************ 00:22:03.323 END TEST blockdev_xnvme 00:22:03.323 ************************************ 00:22:03.323 06:48:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:03.323 06:48:14 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:22:03.323 06:48:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:03.323 06:48:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:03.323 06:48:14 -- common/autotest_common.sh@10 -- # set +x 00:22:03.323 ************************************ 00:22:03.323 START TEST ublk 00:22:03.323 ************************************ 00:22:03.323 06:48:14 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:22:03.323 * Looking for test storage... 
00:22:03.323 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:22:03.323 06:48:14 ublk -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:03.323 06:48:14 ublk -- common/autotest_common.sh@1711 -- # lcov --version 00:22:03.323 06:48:14 ublk -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:03.323 06:48:15 ublk -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:03.323 06:48:15 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:03.323 06:48:15 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:03.323 06:48:15 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:03.323 06:48:15 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:22:03.323 06:48:15 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:22:03.323 06:48:15 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:22:03.323 06:48:15 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:22:03.323 06:48:15 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:22:03.323 06:48:15 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:22:03.323 06:48:15 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:22:03.323 06:48:15 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:03.323 06:48:15 ublk -- scripts/common.sh@344 -- # case "$op" in 00:22:03.323 06:48:15 ublk -- scripts/common.sh@345 -- # : 1 00:22:03.323 06:48:15 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:03.323 06:48:15 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:03.323 06:48:15 ublk -- scripts/common.sh@365 -- # decimal 1 00:22:03.323 06:48:15 ublk -- scripts/common.sh@353 -- # local d=1 00:22:03.323 06:48:15 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:03.323 06:48:15 ublk -- scripts/common.sh@355 -- # echo 1 00:22:03.323 06:48:15 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:22:03.323 06:48:15 ublk -- scripts/common.sh@366 -- # decimal 2 00:22:03.323 06:48:15 ublk -- scripts/common.sh@353 -- # local d=2 00:22:03.323 06:48:15 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:03.323 06:48:15 ublk -- scripts/common.sh@355 -- # echo 2 00:22:03.323 06:48:15 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:22:03.323 06:48:15 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:03.323 06:48:15 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:03.323 06:48:15 ublk -- scripts/common.sh@368 -- # return 0 00:22:03.324 06:48:15 ublk -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:03.324 06:48:15 ublk -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:03.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:03.324 --rc genhtml_branch_coverage=1 00:22:03.324 --rc genhtml_function_coverage=1 00:22:03.324 --rc genhtml_legend=1 00:22:03.324 --rc geninfo_all_blocks=1 00:22:03.324 --rc geninfo_unexecuted_blocks=1 00:22:03.324 00:22:03.324 ' 00:22:03.324 06:48:15 ublk -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:03.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:03.324 --rc genhtml_branch_coverage=1 00:22:03.324 --rc genhtml_function_coverage=1 00:22:03.324 --rc genhtml_legend=1 00:22:03.324 --rc geninfo_all_blocks=1 00:22:03.324 --rc geninfo_unexecuted_blocks=1 00:22:03.324 00:22:03.324 ' 00:22:03.324 06:48:15 ublk -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:03.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:03.324 --rc genhtml_branch_coverage=1 00:22:03.324 --rc 
genhtml_function_coverage=1 00:22:03.324 --rc genhtml_legend=1 00:22:03.324 --rc geninfo_all_blocks=1 00:22:03.324 --rc geninfo_unexecuted_blocks=1 00:22:03.324 00:22:03.324 ' 00:22:03.324 06:48:15 ublk -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:03.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:03.324 --rc genhtml_branch_coverage=1 00:22:03.324 --rc genhtml_function_coverage=1 00:22:03.324 --rc genhtml_legend=1 00:22:03.324 --rc geninfo_all_blocks=1 00:22:03.324 --rc geninfo_unexecuted_blocks=1 00:22:03.324 00:22:03.324 ' 00:22:03.324 06:48:15 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:22:03.324 06:48:15 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:22:03.324 06:48:15 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:22:03.324 06:48:15 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:22:03.324 06:48:15 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:22:03.324 06:48:15 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:22:03.324 06:48:15 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:22:03.324 06:48:15 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:22:03.324 06:48:15 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:22:03.324 06:48:15 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:22:03.324 06:48:15 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:22:03.324 06:48:15 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:22:03.324 06:48:15 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:22:03.324 06:48:15 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:22:03.324 06:48:15 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:22:03.324 06:48:15 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:22:03.324 06:48:15 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:22:03.324 06:48:15 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:22:03.324 06:48:15 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:22:03.324 06:48:15 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:22:03.324 06:48:15 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:03.324 06:48:15 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:03.324 06:48:15 ublk -- common/autotest_common.sh@10 -- # set +x 00:22:03.324 ************************************ 00:22:03.324 START TEST test_save_ublk_config 00:22:03.324 ************************************ 00:22:03.324 06:48:15 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:22:03.324 06:48:15 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:22:03.324 06:48:15 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=73753 00:22:03.324 06:48:15 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:22:03.324 06:48:15 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:22:03.324 06:48:15 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 73753 00:22:03.324 06:48:15 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73753 ']' 00:22:03.324 06:48:15 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:03.324 06:48:15 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:03.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:03.324 06:48:15 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:03.324 06:48:15 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:03.324 06:48:15 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:22:03.324 [2024-12-06 06:48:15.138754] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:22:03.324 [2024-12-06 06:48:15.138852] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73753 ] 00:22:03.324 [2024-12-06 06:48:15.294635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.324 [2024-12-06 06:48:15.395336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.324 06:48:15 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:03.324 06:48:15 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:22:03.324 06:48:15 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:22:03.324 06:48:15 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:22:03.324 06:48:16 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.324 06:48:16 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:22:04.254 [2024-12-06 06:48:16.641498] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:22:04.254 [2024-12-06 06:48:16.642356] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:22:04.254 malloc0 00:22:04.254 [2024-12-06 06:48:16.674479] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:22:04.254 [2024-12-06 06:48:16.674557] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:22:04.254 [2024-12-06 06:48:16.674567] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:22:04.254 [2024-12-06 06:48:16.674574] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:22:06.270 [2024-12-06 06:48:18.625767] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:06.270 [2024-12-06 06:48:18.625807] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:07.644 [2024-12-06 06:48:20.289511] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:07.644 [2024-12-06 06:48:20.289652] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:22:10.269 [2024-12-06 06:48:22.658364] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:22:10.269 0 00:22:10.269 06:48:22 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.269 06:48:22 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:22:10.269 06:48:22 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.269 06:48:22 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:22:10.269 06:48:22 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.269 06:48:22 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:22:10.269 
"subsystems": [ 00:22:10.269 { 00:22:10.269 "subsystem": "fsdev", 00:22:10.269 "config": [ 00:22:10.269 { 00:22:10.269 "method": "fsdev_set_opts", 00:22:10.269 "params": { 00:22:10.269 "fsdev_io_pool_size": 65535, 00:22:10.269 "fsdev_io_cache_size": 256 00:22:10.269 } 00:22:10.269 } 00:22:10.269 ] 00:22:10.269 }, 00:22:10.269 { 00:22:10.269 "subsystem": "keyring", 00:22:10.269 "config": [] 00:22:10.269 }, 00:22:10.269 { 00:22:10.269 "subsystem": "iobuf", 00:22:10.269 "config": [ 00:22:10.269 { 00:22:10.269 "method": "iobuf_set_options", 00:22:10.269 "params": { 00:22:10.269 "small_pool_count": 8192, 00:22:10.269 "large_pool_count": 1024, 00:22:10.269 "small_bufsize": 8192, 00:22:10.269 "large_bufsize": 135168, 00:22:10.269 "enable_numa": false 00:22:10.269 } 00:22:10.269 } 00:22:10.269 ] 00:22:10.269 }, 00:22:10.269 { 00:22:10.269 "subsystem": "sock", 00:22:10.269 "config": [ 00:22:10.269 { 00:22:10.269 "method": "sock_set_default_impl", 00:22:10.269 "params": { 00:22:10.269 "impl_name": "posix" 00:22:10.269 } 00:22:10.269 }, 00:22:10.269 { 00:22:10.269 "method": "sock_impl_set_options", 00:22:10.269 "params": { 00:22:10.269 "impl_name": "ssl", 00:22:10.269 "recv_buf_size": 4096, 00:22:10.269 "send_buf_size": 4096, 00:22:10.269 "enable_recv_pipe": true, 00:22:10.269 "enable_quickack": false, 00:22:10.269 "enable_placement_id": 0, 00:22:10.269 "enable_zerocopy_send_server": true, 00:22:10.269 "enable_zerocopy_send_client": false, 00:22:10.269 "zerocopy_threshold": 0, 00:22:10.269 "tls_version": 0, 00:22:10.269 "enable_ktls": false 00:22:10.269 } 00:22:10.269 }, 00:22:10.269 { 00:22:10.269 "method": "sock_impl_set_options", 00:22:10.269 "params": { 00:22:10.269 "impl_name": "posix", 00:22:10.269 "recv_buf_size": 2097152, 00:22:10.269 "send_buf_size": 2097152, 00:22:10.269 "enable_recv_pipe": true, 00:22:10.269 "enable_quickack": false, 00:22:10.269 "enable_placement_id": 0, 00:22:10.269 "enable_zerocopy_send_server": true, 00:22:10.269 "enable_zerocopy_send_client": false, 00:22:10.269 "zerocopy_threshold": 0, 00:22:10.269 "tls_version": 0, 00:22:10.269 "enable_ktls": false 00:22:10.269 } 00:22:10.269 } 00:22:10.269 ] 00:22:10.269 }, 00:22:10.269 { 00:22:10.269 "subsystem": "vmd", 00:22:10.269 "config": [] 00:22:10.269 }, 00:22:10.269 { 00:22:10.269 "subsystem": "accel", 00:22:10.269 "config": [ 00:22:10.269 { 00:22:10.270 "method": "accel_set_options", 00:22:10.270 "params": { 00:22:10.270 "small_cache_size": 128, 00:22:10.270 "large_cache_size": 16, 00:22:10.270 "task_count": 2048, 00:22:10.270 "sequence_count": 2048, 00:22:10.270 "buf_count": 2048 00:22:10.270 } 00:22:10.270 } 00:22:10.270 ] 00:22:10.270 }, 00:22:10.270 { 00:22:10.270 "subsystem": "bdev", 00:22:10.270 "config": [ 00:22:10.270 { 00:22:10.270 "method": "bdev_set_options", 00:22:10.270 "params": { 00:22:10.270 "bdev_io_pool_size": 65535, 00:22:10.270 "bdev_io_cache_size": 256, 00:22:10.270 "bdev_auto_examine": true, 00:22:10.270 "iobuf_small_cache_size": 128, 00:22:10.270 "iobuf_large_cache_size": 16 00:22:10.270 } 00:22:10.270 }, 00:22:10.270 { 00:22:10.270 "method": "bdev_raid_set_options", 00:22:10.270 "params": { 00:22:10.270 "process_window_size_kb": 1024, 00:22:10.270 "process_max_bandwidth_mb_sec": 0 00:22:10.270 } 00:22:10.270 }, 00:22:10.270 { 00:22:10.270 "method": "bdev_iscsi_set_options", 00:22:10.270 "params": { 00:22:10.270 "timeout_sec": 30 00:22:10.270 } 00:22:10.270 }, 00:22:10.270 { 00:22:10.270 "method": "bdev_nvme_set_options", 00:22:10.270 "params": { 00:22:10.270 "action_on_timeout": "none", 
00:22:10.270 "timeout_us": 0, 00:22:10.270 "timeout_admin_us": 0, 00:22:10.270 "keep_alive_timeout_ms": 10000, 00:22:10.270 "arbitration_burst": 0, 00:22:10.270 "low_priority_weight": 0, 00:22:10.270 "medium_priority_weight": 0, 00:22:10.270 "high_priority_weight": 0, 00:22:10.270 "nvme_adminq_poll_period_us": 10000, 00:22:10.270 "nvme_ioq_poll_period_us": 0, 00:22:10.270 "io_queue_requests": 0, 00:22:10.270 "delay_cmd_submit": true, 00:22:10.270 "transport_retry_count": 4, 00:22:10.270 "bdev_retry_count": 3, 00:22:10.270 "transport_ack_timeout": 0, 00:22:10.270 "ctrlr_loss_timeout_sec": 0, 00:22:10.270 "reconnect_delay_sec": 0, 00:22:10.270 "fast_io_fail_timeout_sec": 0, 00:22:10.270 "disable_auto_failback": false, 00:22:10.270 "generate_uuids": false, 00:22:10.270 "transport_tos": 0, 00:22:10.270 "nvme_error_stat": false, 00:22:10.270 "rdma_srq_size": 0, 00:22:10.270 "io_path_stat": false, 00:22:10.270 "allow_accel_sequence": false, 00:22:10.270 "rdma_max_cq_size": 0, 00:22:10.270 "rdma_cm_event_timeout_ms": 0, 00:22:10.270 "dhchap_digests": [ 00:22:10.270 "sha256", 00:22:10.270 "sha384", 00:22:10.270 "sha512" 00:22:10.270 ], 00:22:10.270 "dhchap_dhgroups": [ 00:22:10.270 "null", 00:22:10.270 "ffdhe2048", 00:22:10.270 "ffdhe3072", 00:22:10.270 "ffdhe4096", 00:22:10.270 "ffdhe6144", 00:22:10.270 "ffdhe8192" 00:22:10.270 ] 00:22:10.270 } 00:22:10.270 }, 00:22:10.270 { 00:22:10.270 "method": "bdev_nvme_set_hotplug", 00:22:10.270 "params": { 00:22:10.270 "period_us": 100000, 00:22:10.270 "enable": false 00:22:10.270 } 00:22:10.270 }, 00:22:10.270 { 00:22:10.270 "method": "bdev_malloc_create", 00:22:10.270 "params": { 00:22:10.270 "name": "malloc0", 00:22:10.270 "num_blocks": 8192, 00:22:10.270 "block_size": 4096, 00:22:10.270 "physical_block_size": 4096, 00:22:10.270 "uuid": "f2edde4c-3cae-4ea8-9097-f0842059c68d", 00:22:10.270 "optimal_io_boundary": 0, 00:22:10.270 "md_size": 0, 00:22:10.270 "dif_type": 0, 00:22:10.270 "dif_is_head_of_md": false, 00:22:10.270 "dif_pi_format": 0 00:22:10.270 } 00:22:10.270 }, 00:22:10.270 { 00:22:10.270 "method": "bdev_wait_for_examine" 00:22:10.270 } 00:22:10.270 ] 00:22:10.270 }, 00:22:10.270 { 00:22:10.270 "subsystem": "scsi", 00:22:10.270 "config": null 00:22:10.270 }, 00:22:10.270 { 00:22:10.270 "subsystem": "scheduler", 00:22:10.270 "config": [ 00:22:10.270 { 00:22:10.270 "method": "framework_set_scheduler", 00:22:10.270 "params": { 00:22:10.270 "name": "static" 00:22:10.270 } 00:22:10.270 } 00:22:10.270 ] 00:22:10.270 }, 00:22:10.270 { 00:22:10.270 "subsystem": "vhost_scsi", 00:22:10.270 "config": [] 00:22:10.270 }, 00:22:10.270 { 00:22:10.270 "subsystem": "vhost_blk", 00:22:10.270 "config": [] 00:22:10.270 }, 00:22:10.270 { 00:22:10.270 "subsystem": "ublk", 00:22:10.270 "config": [ 00:22:10.270 { 00:22:10.270 "method": "ublk_create_target", 00:22:10.270 "params": { 00:22:10.270 "cpumask": "1" 00:22:10.270 } 00:22:10.270 }, 00:22:10.270 { 00:22:10.270 "method": "ublk_start_disk", 00:22:10.270 "params": { 00:22:10.270 "bdev_name": "malloc0", 00:22:10.270 "ublk_id": 0, 00:22:10.270 "num_queues": 1, 00:22:10.270 "queue_depth": 128 00:22:10.270 } 00:22:10.270 } 00:22:10.270 ] 00:22:10.270 }, 00:22:10.270 { 00:22:10.270 "subsystem": "nbd", 00:22:10.270 "config": [] 00:22:10.270 }, 00:22:10.270 { 00:22:10.270 "subsystem": "nvmf", 00:22:10.270 "config": [ 00:22:10.270 { 00:22:10.270 "method": "nvmf_set_config", 00:22:10.270 "params": { 00:22:10.270 "discovery_filter": "match_any", 00:22:10.270 "admin_cmd_passthru": { 00:22:10.270 "identify_ctrlr": false 
00:22:10.270 }, 00:22:10.270 "dhchap_digests": [ 00:22:10.270 "sha256", 00:22:10.270 "sha384", 00:22:10.270 "sha512" 00:22:10.270 ], 00:22:10.270 "dhchap_dhgroups": [ 00:22:10.270 "null", 00:22:10.270 "ffdhe2048", 00:22:10.270 "ffdhe3072", 00:22:10.270 "ffdhe4096", 00:22:10.270 "ffdhe6144", 00:22:10.270 "ffdhe8192" 00:22:10.270 ] 00:22:10.270 } 00:22:10.270 }, 00:22:10.270 { 00:22:10.270 "method": "nvmf_set_max_subsystems", 00:22:10.270 "params": { 00:22:10.270 "max_subsystems": 1024 00:22:10.270 } 00:22:10.270 }, 00:22:10.270 { 00:22:10.270 "method": "nvmf_set_crdt", 00:22:10.270 "params": { 00:22:10.270 "crdt1": 0, 00:22:10.270 "crdt2": 0, 00:22:10.270 "crdt3": 0 00:22:10.270 } 00:22:10.270 } 00:22:10.270 ] 00:22:10.270 }, 00:22:10.270 { 00:22:10.270 "subsystem": "iscsi", 00:22:10.270 "config": [ 00:22:10.270 { 00:22:10.270 "method": "iscsi_set_options", 00:22:10.270 "params": { 00:22:10.270 "node_base": "iqn.2016-06.io.spdk", 00:22:10.270 "max_sessions": 128, 00:22:10.270 "max_connections_per_session": 2, 00:22:10.270 "max_queue_depth": 64, 00:22:10.270 "default_time2wait": 2, 00:22:10.270 "default_time2retain": 20, 00:22:10.270 "first_burst_length": 8192, 00:22:10.270 "immediate_data": true, 00:22:10.270 "allow_duplicated_isid": false, 00:22:10.270 "error_recovery_level": 0, 00:22:10.270 "nop_timeout": 60, 00:22:10.270 "nop_in_interval": 30, 00:22:10.270 "disable_chap": false, 00:22:10.270 "require_chap": false, 00:22:10.270 "mutual_chap": false, 00:22:10.270 "chap_group": 0, 00:22:10.270 "max_large_datain_per_connection": 64, 00:22:10.270 "max_r2t_per_connection": 4, 00:22:10.270 "pdu_pool_size": 36864, 00:22:10.270 "immediate_data_pool_size": 16384, 00:22:10.270 "data_out_pool_size": 2048 00:22:10.270 } 00:22:10.270 } 00:22:10.270 ] 00:22:10.270 } 00:22:10.270 ] 00:22:10.270 }' 00:22:10.270 06:48:22 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 73753 00:22:10.270 06:48:22 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73753 ']' 00:22:10.270 06:48:22 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73753 00:22:10.270 06:48:22 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:22:10.270 06:48:22 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:10.270 06:48:22 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73753 00:22:10.270 killing process with pid 73753 00:22:10.270 06:48:22 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:10.270 06:48:22 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:10.270 06:48:22 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73753' 00:22:10.270 06:48:22 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73753 00:22:10.270 06:48:22 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73753 00:22:11.643 [2024-12-06 06:48:24.035699] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:22:21.628 [2024-12-06 06:48:32.769654] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:21.628 [2024-12-06 06:48:33.409568] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:22:22.192 [2024-12-06 06:48:34.625527] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:22.192 [2024-12-06 
06:48:34.625593] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:22:22.192 [2024-12-06 06:48:34.625604] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:22:22.192 [2024-12-06 06:48:34.625631] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:22:22.192 [2024-12-06 06:48:34.625783] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:22:24.107 06:48:36 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=73994 00:22:24.107 06:48:36 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 73994 00:22:24.107 06:48:36 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73994 ']' 00:22:24.107 06:48:36 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:24.107 06:48:36 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:24.107 06:48:36 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:22:24.107 06:48:36 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:24.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:24.107 06:48:36 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:24.107 06:48:36 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:22:24.107 06:48:36 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:22:24.107 "subsystems": [ 00:22:24.107 { 00:22:24.107 "subsystem": "fsdev", 00:22:24.107 "config": [ 00:22:24.107 { 00:22:24.107 "method": "fsdev_set_opts", 00:22:24.107 "params": { 00:22:24.107 "fsdev_io_pool_size": 65535, 00:22:24.107 "fsdev_io_cache_size": 256 00:22:24.107 } 00:22:24.107 } 00:22:24.107 ] 00:22:24.107 }, 00:22:24.107 { 00:22:24.107 "subsystem": "keyring", 00:22:24.107 "config": [] 00:22:24.107 }, 00:22:24.107 { 00:22:24.107 "subsystem": "iobuf", 00:22:24.107 "config": [ 00:22:24.107 { 00:22:24.107 "method": "iobuf_set_options", 00:22:24.107 "params": { 00:22:24.107 "small_pool_count": 8192, 00:22:24.107 "large_pool_count": 1024, 00:22:24.107 "small_bufsize": 8192, 00:22:24.107 "large_bufsize": 135168, 00:22:24.107 "enable_numa": false 00:22:24.107 } 00:22:24.107 } 00:22:24.107 ] 00:22:24.107 }, 00:22:24.107 { 00:22:24.107 "subsystem": "sock", 00:22:24.107 "config": [ 00:22:24.107 { 00:22:24.107 "method": "sock_set_default_impl", 00:22:24.107 "params": { 00:22:24.107 "impl_name": "posix" 00:22:24.107 } 00:22:24.107 }, 00:22:24.107 { 00:22:24.107 "method": "sock_impl_set_options", 00:22:24.107 "params": { 00:22:24.107 "impl_name": "ssl", 00:22:24.107 "recv_buf_size": 4096, 00:22:24.107 "send_buf_size": 4096, 00:22:24.107 "enable_recv_pipe": true, 00:22:24.107 "enable_quickack": false, 00:22:24.107 "enable_placement_id": 0, 00:22:24.107 "enable_zerocopy_send_server": true, 00:22:24.107 "enable_zerocopy_send_client": false, 00:22:24.107 "zerocopy_threshold": 0, 00:22:24.107 "tls_version": 0, 00:22:24.107 "enable_ktls": false 00:22:24.107 } 00:22:24.107 }, 00:22:24.107 { 00:22:24.107 "method": "sock_impl_set_options", 00:22:24.107 "params": { 00:22:24.107 "impl_name": "posix", 00:22:24.107 "recv_buf_size": 2097152, 00:22:24.107 "send_buf_size": 2097152, 00:22:24.107 "enable_recv_pipe": true, 00:22:24.107 "enable_quickack": false, 00:22:24.107 "enable_placement_id": 0, 00:22:24.107 "enable_zerocopy_send_server": true, 
00:22:24.107 "enable_zerocopy_send_client": false, 00:22:24.107 "zerocopy_threshold": 0, 00:22:24.107 "tls_version": 0, 00:22:24.107 "enable_ktls": false 00:22:24.107 } 00:22:24.107 } 00:22:24.107 ] 00:22:24.107 }, 00:22:24.107 { 00:22:24.107 "subsystem": "vmd", 00:22:24.107 "config": [] 00:22:24.107 }, 00:22:24.107 { 00:22:24.107 "subsystem": "accel", 00:22:24.107 "config": [ 00:22:24.107 { 00:22:24.107 "method": "accel_set_options", 00:22:24.107 "params": { 00:22:24.107 "small_cache_size": 128, 00:22:24.107 "large_cache_size": 16, 00:22:24.107 "task_count": 2048, 00:22:24.107 "sequence_count": 2048, 00:22:24.107 "buf_count": 2048 00:22:24.107 } 00:22:24.107 } 00:22:24.107 ] 00:22:24.107 }, 00:22:24.107 { 00:22:24.107 "subsystem": "bdev", 00:22:24.107 "config": [ 00:22:24.107 { 00:22:24.107 "method": "bdev_set_options", 00:22:24.107 "params": { 00:22:24.107 "bdev_io_pool_size": 65535, 00:22:24.107 "bdev_io_cache_size": 256, 00:22:24.107 "bdev_auto_examine": true, 00:22:24.107 "iobuf_small_cache_size": 128, 00:22:24.107 "iobuf_large_cache_size": 16 00:22:24.107 } 00:22:24.107 }, 00:22:24.107 { 00:22:24.107 "method": "bdev_raid_set_options", 00:22:24.107 "params": { 00:22:24.107 "process_window_size_kb": 1024, 00:22:24.107 "process_max_bandwidth_mb_sec": 0 00:22:24.107 } 00:22:24.108 }, 00:22:24.108 { 00:22:24.108 "method": "bdev_iscsi_set_options", 00:22:24.108 "params": { 00:22:24.108 "timeout_sec": 30 00:22:24.108 } 00:22:24.108 }, 00:22:24.108 { 00:22:24.108 "method": "bdev_nvme_set_options", 00:22:24.108 "params": { 00:22:24.108 "action_on_timeout": "none", 00:22:24.108 "timeout_us": 0, 00:22:24.108 "timeout_admin_us": 0, 00:22:24.108 "keep_alive_timeout_ms": 10000, 00:22:24.108 "arbitration_burst": 0, 00:22:24.108 "low_priority_weight": 0, 00:22:24.108 "medium_priority_weight": 0, 00:22:24.108 "high_priority_weight": 0, 00:22:24.108 "nvme_adminq_poll_period_us": 10000, 00:22:24.108 "nvme_ioq_poll_period_us": 0, 00:22:24.108 "io_queue_requests": 0, 00:22:24.108 "delay_cmd_submit": true, 00:22:24.108 "transport_retry_count": 4, 00:22:24.108 "bdev_retry_count": 3, 00:22:24.108 "transport_ack_timeout": 0, 00:22:24.108 "ctrlr_loss_timeout_sec": 0, 00:22:24.108 "reconnect_delay_sec": 0, 00:22:24.108 "fast_io_fail_timeout_sec": 0, 00:22:24.108 "disable_auto_failback": false, 00:22:24.108 "generate_uuids": false, 00:22:24.108 "transport_tos": 0, 00:22:24.108 "nvme_error_stat": false, 00:22:24.108 "rdma_srq_size": 0, 00:22:24.108 "io_path_stat": false, 00:22:24.108 "allow_accel_sequence": false, 00:22:24.108 "rdma_max_cq_size": 0, 00:22:24.108 "rdma_cm_event_timeout_ms": 0, 00:22:24.108 "dhchap_digests": [ 00:22:24.108 "sha256", 00:22:24.108 "sha384", 00:22:24.108 "sha512" 00:22:24.108 ], 00:22:24.108 "dhchap_dhgroups": [ 00:22:24.108 "null", 00:22:24.108 "ffdhe2048", 00:22:24.108 "ffdhe3072", 00:22:24.108 "ffdhe4096", 00:22:24.108 "ffdhe6144", 00:22:24.108 "ffdhe8192" 00:22:24.108 ] 00:22:24.108 } 00:22:24.108 }, 00:22:24.108 { 00:22:24.108 "method": "bdev_nvme_set_hotplug", 00:22:24.108 "params": { 00:22:24.108 "period_us": 100000, 00:22:24.108 "enable": false 00:22:24.108 } 00:22:24.108 }, 00:22:24.108 { 00:22:24.108 "method": "bdev_malloc_create", 00:22:24.108 "params": { 00:22:24.108 "name": "malloc0", 00:22:24.108 "num_blocks": 8192, 00:22:24.108 "block_size": 4096, 00:22:24.108 "physical_block_size": 4096, 00:22:24.108 "uuid": "f2edde4c-3cae-4ea8-9097-f0842059c68d", 00:22:24.108 "optimal_io_boundary": 0, 00:22:24.108 "md_size": 0, 00:22:24.108 "dif_type": 0, 00:22:24.108 
"dif_is_head_of_md": false, 00:22:24.108 "dif_pi_format": 0 00:22:24.108 } 00:22:24.108 }, 00:22:24.108 { 00:22:24.108 "method": "bdev_wait_for_examine" 00:22:24.108 } 00:22:24.108 ] 00:22:24.108 }, 00:22:24.108 { 00:22:24.108 "subsystem": "scsi", 00:22:24.108 "config": null 00:22:24.108 }, 00:22:24.108 { 00:22:24.108 "subsystem": "scheduler", 00:22:24.108 "config": [ 00:22:24.108 { 00:22:24.108 "method": "framework_set_scheduler", 00:22:24.108 "params": { 00:22:24.108 "name": "static" 00:22:24.108 } 00:22:24.108 } 00:22:24.108 ] 00:22:24.108 }, 00:22:24.108 { 00:22:24.108 "subsystem": "vhost_scsi", 00:22:24.108 "config": [] 00:22:24.108 }, 00:22:24.108 { 00:22:24.108 "subsystem": "vhost_blk", 00:22:24.108 "config": [] 00:22:24.108 }, 00:22:24.108 { 00:22:24.108 "subsystem": "ublk", 00:22:24.108 "config": [ 00:22:24.108 { 00:22:24.108 "method": "ublk_create_target", 00:22:24.108 "params": { 00:22:24.108 "cpumask": "1" 00:22:24.108 } 00:22:24.108 }, 00:22:24.108 { 00:22:24.108 "method": "ublk_start_disk", 00:22:24.108 "params": { 00:22:24.108 "bdev_name": "malloc0", 00:22:24.108 "ublk_id": 0, 00:22:24.108 "num_queues": 1, 00:22:24.108 "queue_depth": 128 00:22:24.108 } 00:22:24.108 } 00:22:24.108 ] 00:22:24.108 }, 00:22:24.108 { 00:22:24.108 "subsystem": "nbd", 00:22:24.108 "config": [] 00:22:24.108 }, 00:22:24.108 { 00:22:24.108 "subsystem": "nvmf", 00:22:24.108 "config": [ 00:22:24.108 { 00:22:24.108 "method": "nvmf_set_config", 00:22:24.108 "params": { 00:22:24.108 "discovery_filter": "match_any", 00:22:24.108 "admin_cmd_passthru": { 00:22:24.108 "identify_ctrlr": false 00:22:24.108 }, 00:22:24.108 "dhchap_digests": [ 00:22:24.108 "sha256", 00:22:24.108 "sha384", 00:22:24.108 "sha512" 00:22:24.108 ], 00:22:24.108 "dhchap_dhgroups": [ 00:22:24.108 "null", 00:22:24.108 "ffdhe2048", 00:22:24.108 "ffdhe3072", 00:22:24.108 "ffdhe4096", 00:22:24.108 "ffdhe6144", 00:22:24.108 "ffdhe8192" 00:22:24.108 ] 00:22:24.108 } 00:22:24.108 }, 00:22:24.108 { 00:22:24.108 "method": "nvmf_set_max_subsystems", 00:22:24.108 "params": { 00:22:24.108 "max_subsystems": 1024 00:22:24.108 } 00:22:24.108 }, 00:22:24.108 { 00:22:24.108 "method": "nvmf_set_crdt", 00:22:24.108 "params": { 00:22:24.108 "crdt1": 0, 00:22:24.108 "crdt2": 0, 00:22:24.108 "crdt3": 0 00:22:24.108 } 00:22:24.108 } 00:22:24.108 ] 00:22:24.108 }, 00:22:24.108 { 00:22:24.108 "subsystem": "iscsi", 00:22:24.108 "config": [ 00:22:24.108 { 00:22:24.108 "method": "iscsi_set_options", 00:22:24.108 "params": { 00:22:24.109 "node_base": "iqn.2016-06.io.spdk", 00:22:24.109 "max_sessions": 128, 00:22:24.109 "max_connections_per_session": 2, 00:22:24.109 "max_queue_depth": 64, 00:22:24.109 "default_time2wait": 2, 00:22:24.109 "default_time2retain": 20, 00:22:24.109 "first_burst_length": 8192, 00:22:24.109 "immediate_data": true, 00:22:24.109 "allow_duplicated_isid": false, 00:22:24.109 "error_recovery_level": 0, 00:22:24.109 "nop_timeout": 60, 00:22:24.109 "nop_in_interval": 30, 00:22:24.109 "disable_chap": false, 00:22:24.109 "require_chap": false, 00:22:24.109 "mutual_chap": false, 00:22:24.109 "chap_group": 0, 00:22:24.109 "max_large_datain_per_connection": 64, 00:22:24.109 "max_r2t_per_connection": 4, 00:22:24.109 "pdu_pool_size": 36864, 00:22:24.109 "immediate_data_pool_size": 16384, 00:22:24.109 "data_out_pool_size": 2048 00:22:24.109 } 00:22:24.109 } 00:22:24.109 ] 00:22:24.109 } 00:22:24.109 ] 00:22:24.109 }' 00:22:24.109 [2024-12-06 06:48:36.397477] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
00:22:24.109 [2024-12-06 06:48:36.397598] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73994 ] 00:22:24.109 [2024-12-06 06:48:36.554116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.109 [2024-12-06 06:48:36.633759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:24.673 [2024-12-06 06:48:37.286479] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:22:24.673 [2024-12-06 06:48:37.287123] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:22:24.673 [2024-12-06 06:48:37.294574] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:22:24.673 [2024-12-06 06:48:37.294634] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:22:24.673 [2024-12-06 06:48:37.294641] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:22:24.673 [2024-12-06 06:48:37.294647] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:22:24.673 [2024-12-06 06:48:37.303532] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:24.673 [2024-12-06 06:48:37.303548] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:24.673 [2024-12-06 06:48:37.310483] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:24.673 [2024-12-06 06:48:37.310556] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:22:24.673 [2024-12-06 06:48:37.327485] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:22:24.673 06:48:37 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:24.673 06:48:37 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:22:24.673 06:48:37 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:22:24.673 06:48:37 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.673 06:48:37 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:22:24.673 06:48:37 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:22:24.673 06:48:37 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.673 06:48:37 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:22:24.673 06:48:37 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:22:24.673 06:48:37 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 73994 00:22:24.673 06:48:37 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73994 ']' 00:22:24.673 06:48:37 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73994 00:22:24.673 06:48:37 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:22:24.673 06:48:37 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:24.931 06:48:37 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73994 00:22:24.931 06:48:37 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:24.931 06:48:37 ublk.test_save_ublk_config -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:24.931 killing process with pid 73994 00:22:24.931 06:48:37 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73994' 00:22:24.931 06:48:37 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73994 00:22:24.931 06:48:37 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73994 00:22:25.863 [2024-12-06 06:48:38.436437] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:22:25.863 [2024-12-06 06:48:38.475489] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:25.863 [2024-12-06 06:48:38.475596] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:22:25.863 [2024-12-06 06:48:38.486480] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:25.863 [2024-12-06 06:48:38.486528] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:22:25.863 [2024-12-06 06:48:38.486535] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:22:25.863 [2024-12-06 06:48:38.486556] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:22:25.863 [2024-12-06 06:48:38.486665] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:22:27.235 06:48:39 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:22:27.235 00:22:27.235 real 0m24.774s 00:22:27.235 user 0m2.468s 00:22:27.235 sys 0m1.421s 00:22:27.235 06:48:39 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:27.235 06:48:39 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:22:27.235 ************************************ 00:22:27.235 END TEST test_save_ublk_config 00:22:27.235 ************************************ 00:22:27.235 06:48:39 ublk -- ublk/ublk.sh@139 -- # spdk_pid=74062 00:22:27.235 06:48:39 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:27.235 06:48:39 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:22:27.235 06:48:39 ublk -- ublk/ublk.sh@141 -- # waitforlisten 74062 00:22:27.235 06:48:39 ublk -- common/autotest_common.sh@835 -- # '[' -z 74062 ']' 00:22:27.235 06:48:39 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:27.235 06:48:39 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:27.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:27.235 06:48:39 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:27.235 06:48:39 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:27.235 06:48:39 ublk -- common/autotest_common.sh@10 -- # set +x 00:22:27.235 [2024-12-06 06:48:39.946252] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
00:22:27.235 [2024-12-06 06:48:39.946352] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74062 ] 00:22:27.492 [2024-12-06 06:48:40.096265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:27.492 [2024-12-06 06:48:40.182201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:27.492 [2024-12-06 06:48:40.182316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.163 06:48:40 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:28.163 06:48:40 ublk -- common/autotest_common.sh@868 -- # return 0 00:22:28.163 06:48:40 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:22:28.163 06:48:40 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:28.163 06:48:40 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:28.163 06:48:40 ublk -- common/autotest_common.sh@10 -- # set +x 00:22:28.163 ************************************ 00:22:28.163 START TEST test_create_ublk 00:22:28.163 ************************************ 00:22:28.163 06:48:40 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:22:28.163 06:48:40 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:22:28.163 06:48:40 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.163 06:48:40 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:28.163 [2024-12-06 06:48:40.811479] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:22:28.163 [2024-12-06 06:48:40.813067] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:22:28.163 06:48:40 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.163 06:48:40 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:22:28.163 06:48:40 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:22:28.163 06:48:40 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.163 06:48:40 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:28.421 06:48:40 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.421 06:48:40 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:22:28.421 06:48:40 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:22:28.421 06:48:40 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.421 06:48:40 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:28.421 [2024-12-06 06:48:40.970590] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:22:28.421 [2024-12-06 06:48:40.970902] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:22:28.421 [2024-12-06 06:48:40.970911] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:22:28.421 [2024-12-06 06:48:40.970916] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:22:28.421 [2024-12-06 06:48:40.978501] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:28.421 [2024-12-06 06:48:40.978522] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:28.421 
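
The ADD_DEV / SET_PARAMS / START_DEV handshake being traced here is the kernel-facing side of a three-call RPC sequence. A minimal sketch of the same flow driven by hand, assuming a spdk_tgt already running with -L ublk; rpc_cmd in the trace wraps the same RPCs exposed by SPDK's scripts/rpc.py, and the repo path is the one used throughout this log:

    # Sketch of the create-ublk flow traced in this test, driven manually.
    SPDK=/home/vagrant/spdk_repo/spdk          # repo path taken from this log
    RPC="$SPDK/scripts/rpc.py"

    sudo modprobe ublk_drv                          # loaded earlier by ublk.sh@133
    "$RPC" ublk_create_target                       # one ublk target per app instance
    "$RPC" bdev_malloc_create -b Malloc0 128 4096   # 128 MiB bdev, 4 KiB blocks
    "$RPC" ublk_start_disk Malloc0 0 -q 4 -d 512    # triggers ADD_DEV/SET_PARAMS/START_DEV
    test -b /dev/ublkb0 && echo "/dev/ublkb0 is up"
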
[2024-12-06 06:48:40.986484] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:28.421 [2024-12-06 06:48:40.987013] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:22:28.421 [2024-12-06 06:48:41.008499] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:22:28.421 06:48:41 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.421 06:48:41 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:22:28.421 06:48:41 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:22:28.421 06:48:41 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:22:28.421 06:48:41 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.421 06:48:41 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:28.421 06:48:41 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.421 06:48:41 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:22:28.421 { 00:22:28.421 "ublk_device": "/dev/ublkb0", 00:22:28.421 "id": 0, 00:22:28.421 "queue_depth": 512, 00:22:28.421 "num_queues": 4, 00:22:28.421 "bdev_name": "Malloc0" 00:22:28.421 } 00:22:28.421 ]' 00:22:28.421 06:48:41 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:22:28.422 06:48:41 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:22:28.422 06:48:41 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:22:28.422 06:48:41 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:22:28.422 06:48:41 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:22:28.422 06:48:41 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:22:28.422 06:48:41 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:22:28.679 06:48:41 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:22:28.679 06:48:41 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:22:28.679 06:48:41 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:22:28.679 06:48:41 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:22:28.679 06:48:41 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:22:28.679 06:48:41 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:22:28.679 06:48:41 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:22:28.679 06:48:41 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:22:28.679 06:48:41 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:22:28.679 06:48:41 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:22:28.679 06:48:41 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:22:28.679 06:48:41 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:22:28.679 06:48:41 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:22:28.679 06:48:41 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
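
The fio command executed just below is assembled by lvol/common.sh's run_fio_test, whose locals appear in the xtrace above. A condensed reconstruction of that helper from the trace; a sketch of its shape, not the verbatim source:

    # Reconstructed from the xtrace above: builds one fio command line and runs it.
    run_fio_test() {
        local file=$1 offset=$2 size=$3 rw=$4 pattern=$5 extra_params=$6
        local pattern_template= fio_template=
        # A pattern argument turns the job into write-then-verify.
        [[ -n $pattern ]] && pattern_template="--do_verify=1 --verify=pattern --verify_pattern=$pattern --verify_state_save=0"
        fio_template="fio --name=fio_test --filename=$file --offset=$offset --size=$size --rw=$rw --direct=1 $extra_params $pattern_template"
        # Unquoted expansion is intentional: the string is split into fio's argv.
        $fio_template
    }

    # The invocation assembled above: 128 MiB of 0xcc, time-based for 10 s.
    run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10'
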
00:22:28.679 06:48:41 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:22:28.679 fio: verification read phase will never start because write phase uses all of runtime 00:22:28.679 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:22:28.679 fio-3.35 00:22:28.679 Starting 1 process 00:22:40.874 00:22:40.874 fio_test: (groupid=0, jobs=1): err= 0: pid=74107: Fri Dec 6 06:48:51 2024 00:22:40.874 write: IOPS=19.5k, BW=76.3MiB/s (80.0MB/s)(763MiB/10001msec); 0 zone resets 00:22:40.874 clat (usec): min=34, max=4036, avg=50.33, stdev=85.00 00:22:40.874 lat (usec): min=35, max=4036, avg=50.82, stdev=85.02 00:22:40.874 clat percentiles (usec): 00:22:40.874 | 1.00th=[ 39], 5.00th=[ 41], 10.00th=[ 42], 20.00th=[ 43], 00:22:40.874 | 30.00th=[ 44], 40.00th=[ 45], 50.00th=[ 46], 60.00th=[ 47], 00:22:40.874 | 70.00th=[ 48], 80.00th=[ 51], 90.00th=[ 57], 95.00th=[ 61], 00:22:40.874 | 99.00th=[ 71], 99.50th=[ 82], 99.90th=[ 1352], 99.95th=[ 2573], 00:22:40.874 | 99.99th=[ 3589] 00:22:40.874 bw ( KiB/s): min=70192, max=83344, per=100.00%, avg=78207.58, stdev=3294.66, samples=19 00:22:40.874 iops : min=17548, max=20836, avg=19552.00, stdev=823.76, samples=19 00:22:40.874 lat (usec) : 50=78.50%, 100=21.15%, 250=0.19%, 500=0.03%, 750=0.01% 00:22:40.874 lat (usec) : 1000=0.01% 00:22:40.874 lat (msec) : 2=0.05%, 4=0.07%, 10=0.01% 00:22:40.874 cpu : usr=4.15%, sys=16.23%, ctx=195291, majf=0, minf=796 00:22:40.874 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:40.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:40.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:40.874 issued rwts: total=0,195297,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:40.874 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:40.874 00:22:40.874 Run status group 0 (all jobs): 00:22:40.874 WRITE: bw=76.3MiB/s (80.0MB/s), 76.3MiB/s-76.3MiB/s (80.0MB/s-80.0MB/s), io=763MiB (800MB), run=10001-10001msec 00:22:40.874 00:22:40.874 Disk stats (read/write): 00:22:40.874 ublkb0: ios=0/193362, merge=0/0, ticks=0/7968, in_queue=7968, util=99.08% 00:22:40.874 06:48:51 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:22:40.874 06:48:51 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.874 06:48:51 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:40.874 [2024-12-06 06:48:51.437084] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:22:40.874 [2024-12-06 06:48:51.464954] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:40.874 [2024-12-06 06:48:51.465888] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:22:40.874 [2024-12-06 06:48:51.472501] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:40.874 [2024-12-06 06:48:51.472725] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:22:40.874 [2024-12-06 06:48:51.472734] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:22:40.874 06:48:51 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.874 06:48:51 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 
0 00:22:40.874 06:48:51 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:22:40.874 06:48:51 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:22:40.874 06:48:51 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:40.874 06:48:51 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:40.874 06:48:51 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:40.874 06:48:51 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:40.874 06:48:51 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:22:40.874 06:48:51 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.874 06:48:51 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:40.874 [2024-12-06 06:48:51.488541] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:22:40.874 request: 00:22:40.874 { 00:22:40.874 "ublk_id": 0, 00:22:40.874 "method": "ublk_stop_disk", 00:22:40.874 "req_id": 1 00:22:40.874 } 00:22:40.874 Got JSON-RPC error response 00:22:40.874 response: 00:22:40.874 { 00:22:40.874 "code": -19, 00:22:40.874 "message": "No such device" 00:22:40.874 } 00:22:40.874 06:48:51 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:40.874 06:48:51 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:22:40.874 06:48:51 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:40.874 06:48:51 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:40.874 06:48:51 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:40.874 06:48:51 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:22:40.874 06:48:51 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.874 06:48:51 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:40.874 [2024-12-06 06:48:51.504543] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:22:40.874 [2024-12-06 06:48:51.508292] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:22:40.874 [2024-12-06 06:48:51.508323] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:22:40.874 06:48:51 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.874 06:48:51 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:40.874 06:48:51 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.874 06:48:51 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:40.875 06:48:51 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.875 06:48:51 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:22:40.875 06:48:51 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:22:40.875 06:48:51 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.875 06:48:51 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:40.875 06:48:51 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.875 06:48:51 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:22:40.875 06:48:51 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:22:40.875 06:48:51 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:22:40.875 06:48:51 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:22:40.875 06:48:51 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.875 06:48:51 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:40.875 06:48:51 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.875 06:48:51 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:22:40.875 06:48:51 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:22:40.875 06:48:51 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:22:40.875 00:22:40.875 real 0m11.173s 00:22:40.875 user 0m0.723s 00:22:40.875 sys 0m1.711s 00:22:40.875 06:48:51 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:40.875 ************************************ 00:22:40.875 END TEST test_create_ublk 00:22:40.875 ************************************ 00:22:40.875 06:48:51 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:40.875 06:48:52 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:22:40.875 06:48:52 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:40.875 06:48:52 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:40.875 06:48:52 ublk -- common/autotest_common.sh@10 -- # set +x 00:22:40.875 ************************************ 00:22:40.875 START TEST test_create_multi_ublk 00:22:40.875 ************************************ 00:22:40.875 06:48:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:22:40.875 06:48:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:22:40.875 06:48:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.875 06:48:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:40.875 [2024-12-06 06:48:52.026475] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:22:40.875 [2024-12-06 06:48:52.028036] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:22:40.875 06:48:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.875 06:48:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:22:40.875 06:48:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:22:40.875 06:48:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:40.875 06:48:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:22:40.875 06:48:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.875 06:48:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:40.875 06:48:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.875 06:48:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:22:40.875 06:48:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:22:40.875 06:48:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.875 06:48:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:40.875 [2024-12-06 06:48:52.250581] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
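
The per-device sequence starting here (Malloc0/ublk0) repeats for IDs 0 through 3. A sketch of the loop driving this section, using the MAX_DEV_ID and queue shape set when ublk.sh was sourced earlier in this log:

    # Sketch of the multi-ublk loop traced below: one malloc bdev plus one
    # ublk disk per ID; MAX_DEV_ID=3 and -q 4 -d 512 come from this log.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    MAX_DEV_ID=3
    for i in $(seq 0 $MAX_DEV_ID); do
        "$RPC" bdev_malloc_create -b "Malloc$i" 128 4096
        "$RPC" ublk_start_disk "Malloc$i" "$i" -q 4 -d 512   # -> /dev/ublkb$i
    done
    "$RPC" ublk_get_disks   # should list ublkb0 through ublkb3
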
00:22:40.875 [2024-12-06 06:48:52.250881] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:22:40.875 [2024-12-06 06:48:52.250888] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:22:40.875 [2024-12-06 06:48:52.250896] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:22:40.875 [2024-12-06 06:48:52.262512] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:40.875 [2024-12-06 06:48:52.262532] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:40.875 [2024-12-06 06:48:52.274480] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:40.875 [2024-12-06 06:48:52.274990] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:22:40.875 [2024-12-06 06:48:52.294504] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:22:40.875 06:48:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.875 06:48:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:22:40.875 06:48:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:40.875 06:48:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:22:40.875 06:48:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.875 06:48:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:40.875 06:48:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.875 06:48:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:22:40.875 06:48:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:22:40.875 06:48:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.875 06:48:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:40.875 [2024-12-06 06:48:52.533583] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:22:40.875 [2024-12-06 06:48:52.533886] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:22:40.875 [2024-12-06 06:48:52.533894] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:22:40.875 [2024-12-06 06:48:52.533899] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:22:40.875 [2024-12-06 06:48:52.545506] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:40.875 [2024-12-06 06:48:52.545522] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:40.875 [2024-12-06 06:48:52.557491] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:40.875 [2024-12-06 06:48:52.558009] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:22:40.875 [2024-12-06 06:48:52.597489] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:22:40.875 06:48:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.875 06:48:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:22:40.875 06:48:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:40.875 06:48:52 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:22:40.875 06:48:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.875 06:48:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:40.875 06:48:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.875 06:48:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:22:40.875 06:48:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:22:40.875 06:48:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.875 06:48:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:40.875 [2024-12-06 06:48:52.837577] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:22:40.875 [2024-12-06 06:48:52.837879] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:22:40.875 [2024-12-06 06:48:52.837886] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:22:40.875 [2024-12-06 06:48:52.837893] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:22:40.875 [2024-12-06 06:48:52.849495] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:40.875 [2024-12-06 06:48:52.849515] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:40.875 [2024-12-06 06:48:52.861480] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:40.875 [2024-12-06 06:48:52.861984] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:22:40.875 [2024-12-06 06:48:52.884492] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:22:40.875 06:48:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.875 06:48:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:22:40.875 06:48:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:40.875 06:48:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:22:40.875 06:48:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.875 06:48:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:40.875 06:48:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.875 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:22:40.875 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:22:40.875 06:48:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.875 06:48:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:40.875 [2024-12-06 06:48:53.125591] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:22:40.875 [2024-12-06 06:48:53.125886] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:22:40.875 [2024-12-06 06:48:53.125894] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:22:40.875 [2024-12-06 06:48:53.125899] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:22:40.875 [2024-12-06 
06:48:53.137500] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:40.875 [2024-12-06 06:48:53.137517] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:40.875 [2024-12-06 06:48:53.149486] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:40.875 [2024-12-06 06:48:53.149989] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:22:40.875 [2024-12-06 06:48:53.162507] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:22:40.875 06:48:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.875 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:22:40.876 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:22:40.876 06:48:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.876 06:48:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:40.876 06:48:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.876 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:22:40.876 { 00:22:40.876 "ublk_device": "/dev/ublkb0", 00:22:40.876 "id": 0, 00:22:40.876 "queue_depth": 512, 00:22:40.876 "num_queues": 4, 00:22:40.876 "bdev_name": "Malloc0" 00:22:40.876 }, 00:22:40.876 { 00:22:40.876 "ublk_device": "/dev/ublkb1", 00:22:40.876 "id": 1, 00:22:40.876 "queue_depth": 512, 00:22:40.876 "num_queues": 4, 00:22:40.876 "bdev_name": "Malloc1" 00:22:40.876 }, 00:22:40.876 { 00:22:40.876 "ublk_device": "/dev/ublkb2", 00:22:40.876 "id": 2, 00:22:40.876 "queue_depth": 512, 00:22:40.876 "num_queues": 4, 00:22:40.876 "bdev_name": "Malloc2" 00:22:40.876 }, 00:22:40.876 { 00:22:40.876 "ublk_device": "/dev/ublkb3", 00:22:40.876 "id": 3, 00:22:40.876 "queue_depth": 512, 00:22:40.876 "num_queues": 4, 00:22:40.876 "bdev_name": "Malloc3" 00:22:40.876 } 00:22:40.876 ]' 00:22:40.876 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:22:40.876 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:40.876 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:22:40.876 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:22:40.876 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:22:40.876 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:22:40.876 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:22:40.876 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:22:40.876 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:22:40.876 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:22:40.876 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:22:40.876 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:22:40.876 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:40.876 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:22:40.876 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
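The xtrace output above is ublk.sh's create-and-verify pass: for each of four devices it allocates a 128 MiB malloc bdev with 4 KiB blocks, exposes it through the kernel ublk driver with 4 queues of depth 512, then checks every field that ublk_get_disks reports. A standalone sketch of that loop, assuming rpc.py is on PATH and a target with ublk_create_target already done is listening on the default RPC socket:

    MAX_DEV_ID=3
    for i in $(seq 0 $MAX_DEV_ID); do
        rpc.py bdev_malloc_create -b "Malloc$i" 128 4096    # 128 MiB, 4 KiB blocks
        rpc.py ublk_start_disk "Malloc$i" "$i" -q 4 -d 512  # creates /dev/ublkb$i
    done
    disks=$(rpc.py ublk_get_disks)
    for i in $(seq 0 $MAX_DEV_ID); do
        [[ $(jq -r ".[$i].ublk_device" <<< "$disks") == "/dev/ublkb$i" ]]
        [[ $(jq -r ".[$i].id"          <<< "$disks") == "$i" ]]
        [[ $(jq -r ".[$i].queue_depth" <<< "$disks") == 512 ]]
        [[ $(jq -r ".[$i].num_queues"  <<< "$disks") == 4 ]]
        [[ $(jq -r ".[$i].bdev_name"   <<< "$disks") == "Malloc$i" ]]
    done

Each UBLK_CMD_ADD_DEV / UBLK_CMD_SET_PARAMS / UBLK_CMD_START_DEV triple in the debug output corresponds to one ublk_start_disk call completing at the kernel control plane.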
00:22:40.876 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:22:40.876 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:22:40.876 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:22:40.876 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:22:40.876 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:22:40.876 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:22:40.876 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:22:40.876 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:22:40.876 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:40.876 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:22:40.876 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:22:40.876 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:22:40.876 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:22:40.876 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:22:41.134 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:22:41.134 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:22:41.134 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:22:41.134 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:22:41.134 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:22:41.134 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:41.134 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:22:41.134 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:22:41.134 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:22:41.134 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:22:41.134 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:22:41.134 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:22:41.134 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:22:41.134 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:22:41.135 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:22:41.135 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:22:41.135 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:22:41.135 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:22:41.135 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:41.135 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:22:41.135 06:48:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.135 06:48:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:41.135 [2024-12-06 06:48:53.841571] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:22:41.393 [2024-12-06 06:48:53.878499] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:41.393 [2024-12-06 06:48:53.879231] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:22:41.393 [2024-12-06 06:48:53.885482] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:41.393 [2024-12-06 06:48:53.885737] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:22:41.393 [2024-12-06 06:48:53.885747] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:22:41.393 06:48:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.393 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:41.393 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:22:41.393 06:48:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.393 06:48:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:41.393 [2024-12-06 06:48:53.893569] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:22:41.393 [2024-12-06 06:48:53.924949] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:41.393 [2024-12-06 06:48:53.925911] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:22:41.393 [2024-12-06 06:48:53.932487] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:41.393 [2024-12-06 06:48:53.932707] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:22:41.393 [2024-12-06 06:48:53.932715] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:22:41.393 06:48:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.393 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:41.393 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:22:41.393 06:48:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.393 06:48:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:41.393 [2024-12-06 06:48:53.946575] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:22:41.393 [2024-12-06 06:48:53.990511] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:41.393 [2024-12-06 06:48:53.991139] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:22:41.393 [2024-12-06 06:48:53.999489] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:41.393 [2024-12-06 06:48:53.999736] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:22:41.393 [2024-12-06 06:48:53.999748] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:22:41.393 06:48:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.393 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:41.393 06:48:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:22:41.393 06:48:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.393 06:48:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
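Teardown runs the same loop in reverse. Stopping a disk is two control commands (UBLK_CMD_STOP_DEV, then UBLK_CMD_DEL_DEV), after which the device leaves the target's tailq; once all four are gone the target itself is destroyed with an extended RPC timeout and the malloc bdevs are deleted. Condensed from the surrounding log lines:

    for i in $(seq 0 $MAX_DEV_ID); do
        rpc.py ublk_stop_disk "$i"        # STOP_DEV + DEL_DEV per device
    done
    rpc.py -t 120 ublk_destroy_target     # stop can be slow; ublk.sh raises the timeout
    for i in $(seq 0 $MAX_DEV_ID); do
        rpc.py bdev_malloc_delete "Malloc$i"
    done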
00:22:41.393 [2024-12-06 06:48:54.011553] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:22:41.393 [2024-12-06 06:48:54.055947] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:41.393 [2024-12-06 06:48:54.056859] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:22:41.393 [2024-12-06 06:48:54.063486] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:41.393 [2024-12-06 06:48:54.063705] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:22:41.393 [2024-12-06 06:48:54.063712] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:22:41.393 06:48:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.393 06:48:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:22:41.651 [2024-12-06 06:48:54.255546] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:22:41.651 [2024-12-06 06:48:54.259220] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:22:41.651 [2024-12-06 06:48:54.259251] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:22:41.651 06:48:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:22:41.651 06:48:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:41.651 06:48:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:41.651 06:48:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.651 06:48:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:41.909 06:48:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.909 06:48:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:41.909 06:48:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:41.909 06:48:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.909 06:48:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:42.476 06:48:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.476 06:48:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:42.476 06:48:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:22:42.476 06:48:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.476 06:48:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:42.476 06:48:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.476 06:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:42.476 06:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:22:42.476 06:48:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.476 06:48:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:42.734 06:48:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.734 06:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:22:42.734 06:48:55 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:22:42.734 06:48:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.734 06:48:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:42.734 06:48:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.734 06:48:55 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:22:42.734 06:48:55 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:22:42.734 06:48:55 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:22:42.734 06:48:55 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:22:42.734 06:48:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.734 06:48:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:42.734 06:48:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.734 06:48:55 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:22:42.734 06:48:55 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:22:42.992 06:48:55 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:22:42.992 00:22:42.992 real 0m3.465s 00:22:42.992 user 0m0.825s 00:22:42.992 sys 0m0.138s 00:22:42.992 06:48:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:42.992 ************************************ 00:22:42.992 END TEST test_create_multi_ublk 00:22:42.992 ************************************ 00:22:42.992 06:48:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:42.992 06:48:55 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:22:42.992 06:48:55 ublk -- ublk/ublk.sh@147 -- # cleanup 00:22:42.992 06:48:55 ublk -- ublk/ublk.sh@130 -- # killprocess 74062 00:22:42.992 06:48:55 ublk -- common/autotest_common.sh@954 -- # '[' -z 74062 ']' 00:22:42.992 06:48:55 ublk -- common/autotest_common.sh@958 -- # kill -0 74062 00:22:42.992 06:48:55 ublk -- common/autotest_common.sh@959 -- # uname 00:22:42.992 06:48:55 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:42.992 06:48:55 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74062 00:22:42.992 06:48:55 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:42.992 killing process with pid 74062 00:22:42.992 06:48:55 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:42.992 06:48:55 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74062' 00:22:42.992 06:48:55 ublk -- common/autotest_common.sh@973 -- # kill 74062 00:22:42.992 06:48:55 ublk -- common/autotest_common.sh@978 -- # wait 74062 00:22:43.558 [2024-12-06 06:48:56.107839] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:22:43.558 [2024-12-06 06:48:56.107885] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:22:44.124 00:22:44.124 real 0m41.828s 00:22:44.124 user 0m32.629s 00:22:44.124 sys 0m8.354s 00:22:44.124 06:48:56 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:44.124 06:48:56 ublk -- common/autotest_common.sh@10 -- # set +x 00:22:44.124 ************************************ 00:22:44.124 END TEST ublk 00:22:44.124 ************************************ 00:22:44.124 06:48:56 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:22:44.124 06:48:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:22:44.124 06:48:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:44.124 06:48:56 -- common/autotest_common.sh@10 -- # set +x 00:22:44.124 ************************************ 00:22:44.124 START TEST ublk_recovery 00:22:44.124 ************************************ 00:22:44.124 06:48:56 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:22:44.124 * Looking for test storage... 00:22:44.124 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:22:44.124 06:48:56 ublk_recovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:44.124 06:48:56 ublk_recovery -- common/autotest_common.sh@1711 -- # lcov --version 00:22:44.124 06:48:56 ublk_recovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:44.382 06:48:56 ublk_recovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:44.382 06:48:56 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:44.382 06:48:56 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:44.382 06:48:56 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:44.382 06:48:56 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:22:44.382 06:48:56 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:22:44.382 06:48:56 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:22:44.382 06:48:56 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:22:44.382 06:48:56 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:22:44.382 06:48:56 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:22:44.382 06:48:56 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:22:44.382 06:48:56 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:44.382 06:48:56 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:22:44.382 06:48:56 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:22:44.382 06:48:56 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:44.382 06:48:56 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:44.382 06:48:56 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:22:44.382 06:48:56 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:22:44.382 06:48:56 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:44.382 06:48:56 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:22:44.382 06:48:56 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:22:44.382 06:48:56 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:22:44.382 06:48:56 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:22:44.382 06:48:56 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:44.382 06:48:56 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:22:44.382 06:48:56 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:22:44.382 06:48:56 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:44.382 06:48:56 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:44.382 06:48:56 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:22:44.382 06:48:56 ublk_recovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:44.382 06:48:56 ublk_recovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:44.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.382 --rc genhtml_branch_coverage=1 00:22:44.382 --rc genhtml_function_coverage=1 00:22:44.382 --rc genhtml_legend=1 00:22:44.382 --rc geninfo_all_blocks=1 00:22:44.382 --rc geninfo_unexecuted_blocks=1 00:22:44.382 00:22:44.382 ' 00:22:44.382 06:48:56 ublk_recovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:44.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.382 --rc genhtml_branch_coverage=1 00:22:44.382 --rc genhtml_function_coverage=1 00:22:44.382 --rc genhtml_legend=1 00:22:44.382 --rc geninfo_all_blocks=1 00:22:44.382 --rc geninfo_unexecuted_blocks=1 00:22:44.382 00:22:44.382 ' 00:22:44.382 06:48:56 ublk_recovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:44.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.382 --rc genhtml_branch_coverage=1 00:22:44.382 --rc genhtml_function_coverage=1 00:22:44.382 --rc genhtml_legend=1 00:22:44.382 --rc geninfo_all_blocks=1 00:22:44.382 --rc geninfo_unexecuted_blocks=1 00:22:44.382 00:22:44.382 ' 00:22:44.382 06:48:56 ublk_recovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:44.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.382 --rc genhtml_branch_coverage=1 00:22:44.382 --rc genhtml_function_coverage=1 00:22:44.382 --rc genhtml_legend=1 00:22:44.382 --rc geninfo_all_blocks=1 00:22:44.382 --rc geninfo_unexecuted_blocks=1 00:22:44.382 00:22:44.382 ' 00:22:44.382 06:48:56 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:22:44.382 06:48:56 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:22:44.382 06:48:56 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:22:44.382 06:48:56 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:22:44.382 06:48:56 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:22:44.382 06:48:56 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:22:44.382 06:48:56 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:22:44.382 06:48:56 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:22:44.382 06:48:56 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:22:44.382 06:48:56 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:22:44.382 06:48:56 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=74454 00:22:44.382 06:48:56 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:44.382 06:48:56 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:22:44.382 06:48:56 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 74454 00:22:44.382 06:48:56 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74454 ']' 00:22:44.383 06:48:56 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:44.383 06:48:56 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:44.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:44.383 06:48:56 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:44.383 06:48:56 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:44.383 06:48:56 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:44.383 [2024-12-06 06:48:57.001934] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:22:44.383 [2024-12-06 06:48:57.002056] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74454 ] 00:22:44.640 [2024-12-06 06:48:57.157794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:44.640 [2024-12-06 06:48:57.238380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.640 [2024-12-06 06:48:57.238413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:45.206 06:48:57 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:45.206 06:48:57 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:22:45.206 06:48:57 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:22:45.206 06:48:57 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.206 06:48:57 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:45.206 [2024-12-06 06:48:57.834479] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:22:45.206 [2024-12-06 06:48:57.836016] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:22:45.206 06:48:57 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.206 06:48:57 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:22:45.206 06:48:57 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.206 06:48:57 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:45.206 malloc0 00:22:45.206 06:48:57 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.206 06:48:57 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:22:45.206 06:48:57 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.206 06:48:57 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:45.206 [2024-12-06 06:48:57.914580] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:22:45.206 [2024-12-06 06:48:57.914661] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:22:45.206 [2024-12-06 06:48:57.914669] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:22:45.206 [2024-12-06 06:48:57.914675] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:22:45.206 [2024-12-06 06:48:57.923561] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:45.206 [2024-12-06 06:48:57.923580] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:45.206 [2024-12-06 06:48:57.930485] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:45.206 [2024-12-06 06:48:57.930598] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:22:45.464 [2024-12-06 06:48:57.951493] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:22:45.464 1 00:22:45.464 06:48:57 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.464 06:48:57 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:22:46.397 06:48:58 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=74488 00:22:46.397 06:48:58 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:22:46.397 06:48:58 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:22:46.397 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:46.397 fio-3.35 00:22:46.397 Starting 1 process 00:22:51.658 06:49:03 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 74454 00:22:51.658 06:49:03 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:22:56.919 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 74454 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:22:56.919 06:49:08 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=74598 00:22:56.919 06:49:08 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:56.919 06:49:08 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:22:56.919 06:49:08 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 74598 00:22:56.919 06:49:08 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74598 ']' 00:22:56.919 06:49:08 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:56.919 06:49:08 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:56.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:56.919 06:49:08 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:56.919 06:49:08 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:56.919 06:49:08 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.919 [2024-12-06 06:49:09.050046] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
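ublk_recovery.sh exercises crash recovery end to end: start a target, expose malloc0 as /dev/ublkb1, run a 60-second random read/write fio job against it, SIGKILL the target mid-run, then bring up a fresh target and re-attach the live kernel device with ublk_recover_disk. A compressed sketch of the sequence (fio flags are copied from the log; the sleeps stand in for the script's pacing and are illustrative):

    "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk & tgt_pid=$!
    rpc.py ublk_create_target
    rpc.py bdev_malloc_create -b malloc0 64 4096
    rpc.py ublk_start_disk malloc0 1 -q 2 -d 128         # /dev/ublkb1

    taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
        --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
        --time_based --runtime=60 & fio_pid=$!

    sleep 5
    kill -9 "$tgt_pid"                                   # simulated crash
    sleep 5

    "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk & tgt_pid=$!
    rpc.py ublk_create_target
    rpc.py bdev_malloc_create -b malloc0 64 4096
    rpc.py ublk_recover_disk malloc0 1                   # GET_DEV_INFO, then
                                                         # START/END_USER_RECOVERY
    wait "$fio_pid"                                      # fio rides out the recovery

The fio summary further down (err=0, ~107 MiB/s in both directions, ublkb1 util=99.90%) is the pass signal: no I/O errors across the kill/recover window.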
00:22:56.919 [2024-12-06 06:49:09.050161] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74598 ] 00:22:56.919 [2024-12-06 06:49:09.207977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:56.919 [2024-12-06 06:49:09.305996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:56.919 [2024-12-06 06:49:09.306084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:57.177 06:49:09 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:57.177 06:49:09 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:22:57.177 06:49:09 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:22:57.177 06:49:09 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.177 06:49:09 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:57.177 [2024-12-06 06:49:09.895483] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:22:57.177 [2024-12-06 06:49:09.897325] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:22:57.177 06:49:09 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.177 06:49:09 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:22:57.177 06:49:09 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.177 06:49:09 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:57.434 malloc0 00:22:57.434 06:49:09 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.434 06:49:09 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:22:57.434 06:49:09 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.434 06:49:09 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:57.434 [2024-12-06 06:49:09.999607] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:22:57.434 [2024-12-06 06:49:09.999647] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:22:57.434 [2024-12-06 06:49:09.999661] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:22:57.434 [2024-12-06 06:49:10.007522] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:22:57.434 [2024-12-06 06:49:10.007545] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:22:57.434 [2024-12-06 06:49:10.007553] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:22:57.434 [2024-12-06 06:49:10.007623] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:22:57.434 1 00:22:57.434 06:49:10 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.434 06:49:10 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 74488 00:22:57.434 [2024-12-06 06:49:10.015500] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:22:57.434 [2024-12-06 06:49:10.021377] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:22:57.434 [2024-12-06 06:49:10.027667] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:22:57.434 [2024-12-06 
06:49:10.027686] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:23:53.641 00:23:53.641 fio_test: (groupid=0, jobs=1): err= 0: pid=74491: Fri Dec 6 06:49:59 2024 00:23:53.641 read: IOPS=27.3k, BW=107MiB/s (112MB/s)(6408MiB/60002msec) 00:23:53.641 slat (nsec): min=875, max=317457, avg=4855.90, stdev=1645.44 00:23:53.641 clat (usec): min=628, max=6070.9k, avg=2284.36, stdev=36700.32 00:23:53.641 lat (usec): min=633, max=6070.9k, avg=2289.22, stdev=36700.31 00:23:53.641 clat percentiles (usec): 00:23:53.641 | 1.00th=[ 1680], 5.00th=[ 1811], 10.00th=[ 1844], 20.00th=[ 1893], 00:23:53.641 | 30.00th=[ 1909], 40.00th=[ 1926], 50.00th=[ 1942], 60.00th=[ 1975], 00:23:53.641 | 70.00th=[ 1991], 80.00th=[ 2024], 90.00th=[ 2147], 95.00th=[ 2868], 00:23:53.641 | 99.00th=[ 4752], 99.50th=[ 5407], 99.90th=[ 6718], 99.95th=[ 7373], 00:23:53.641 | 99.99th=[12518] 00:23:53.641 bw ( KiB/s): min=15000, max=129496, per=100.00%, avg=120466.87, stdev=14353.83, samples=108 00:23:53.641 iops : min= 3750, max=32372, avg=30116.71, stdev=3588.45, samples=108 00:23:53.641 write: IOPS=27.3k, BW=107MiB/s (112MB/s)(6402MiB/60002msec); 0 zone resets 00:23:53.641 slat (nsec): min=944, max=871653, avg=4894.50, stdev=1815.39 00:23:53.641 clat (usec): min=631, max=6070.7k, avg=2388.88, stdev=39086.90 00:23:53.641 lat (usec): min=645, max=6070.7k, avg=2393.77, stdev=39086.89 00:23:53.641 clat percentiles (usec): 00:23:53.641 | 1.00th=[ 1713], 5.00th=[ 1893], 10.00th=[ 1926], 20.00th=[ 1975], 00:23:53.641 | 30.00th=[ 1991], 40.00th=[ 2024], 50.00th=[ 2040], 60.00th=[ 2057], 00:23:53.641 | 70.00th=[ 2073], 80.00th=[ 2114], 90.00th=[ 2245], 95.00th=[ 2802], 00:23:53.641 | 99.00th=[ 4752], 99.50th=[ 5473], 99.90th=[ 6652], 99.95th=[ 7439], 00:23:53.641 | 99.99th=[12649] 00:23:53.641 bw ( KiB/s): min=14536, max=129064, per=100.00%, avg=120364.30, stdev=14456.01, samples=108 00:23:53.641 iops : min= 3634, max=32266, avg=30091.07, stdev=3614.00, samples=108 00:23:53.641 lat (usec) : 750=0.01%, 1000=0.01% 00:23:53.641 lat (msec) : 2=53.10%, 4=44.65%, 10=2.23%, 20=0.01%, >=2000=0.01% 00:23:53.641 cpu : usr=6.23%, sys=26.88%, ctx=112401, majf=0, minf=13 00:23:53.641 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:23:53.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:53.641 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:53.641 issued rwts: total=1640398,1638986,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:53.641 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:53.641 00:23:53.641 Run status group 0 (all jobs): 00:23:53.641 READ: bw=107MiB/s (112MB/s), 107MiB/s-107MiB/s (112MB/s-112MB/s), io=6408MiB (6719MB), run=60002-60002msec 00:23:53.642 WRITE: bw=107MiB/s (112MB/s), 107MiB/s-107MiB/s (112MB/s-112MB/s), io=6402MiB (6713MB), run=60002-60002msec 00:23:53.642 00:23:53.642 Disk stats (read/write): 00:23:53.642 ublkb1: ios=1637063/1635695, merge=0/0, ticks=3656811/3696252, in_queue=7353064, util=99.90% 00:23:53.642 06:49:59 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:23:53.642 06:49:59 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.642 06:49:59 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.642 [2024-12-06 06:49:59.227017] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:23:53.642 [2024-12-06 06:49:59.267598] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 
completed 00:23:53.642 [2024-12-06 06:49:59.267724] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:23:53.642 [2024-12-06 06:49:59.278493] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:23:53.642 [2024-12-06 06:49:59.278609] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:23:53.642 [2024-12-06 06:49:59.278620] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:23:53.642 06:49:59 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.642 06:49:59 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:23:53.642 06:49:59 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.642 06:49:59 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.642 [2024-12-06 06:49:59.294553] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:23:53.642 [2024-12-06 06:49:59.298224] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:23:53.642 [2024-12-06 06:49:59.298252] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:23:53.642 06:49:59 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.642 06:49:59 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:23:53.642 06:49:59 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:23:53.642 06:49:59 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 74598 00:23:53.642 06:49:59 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 74598 ']' 00:23:53.642 06:49:59 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 74598 00:23:53.642 06:49:59 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:23:53.642 06:49:59 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:53.642 06:49:59 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74598 00:23:53.642 06:49:59 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:53.642 06:49:59 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:53.642 06:49:59 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74598' 00:23:53.642 killing process with pid 74598 00:23:53.642 06:49:59 ublk_recovery -- common/autotest_common.sh@973 -- # kill 74598 00:23:53.642 06:49:59 ublk_recovery -- common/autotest_common.sh@978 -- # wait 74598 00:23:53.642 [2024-12-06 06:50:00.371240] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:23:53.642 [2024-12-06 06:50:00.371276] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:23:53.642 00:23:53.642 real 1m4.309s 00:23:53.642 user 1m47.443s 00:23:53.642 sys 0m30.334s 00:23:53.642 06:50:01 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:53.642 ************************************ 00:23:53.642 END TEST ublk_recovery 00:23:53.642 ************************************ 00:23:53.642 06:50:01 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.642 06:50:01 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:23:53.642 06:50:01 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:23:53.642 06:50:01 -- spdk/autotest.sh@260 -- # timing_exit lib 00:23:53.642 06:50:01 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:53.642 06:50:01 -- common/autotest_common.sh@10 -- # set +x 00:23:53.642 06:50:01 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:23:53.642 06:50:01 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:23:53.642 06:50:01 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 
']' 00:23:53.642 06:50:01 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:23:53.642 06:50:01 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:23:53.642 06:50:01 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:23:53.642 06:50:01 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:23:53.642 06:50:01 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:23:53.642 06:50:01 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:23:53.642 06:50:01 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:23:53.642 06:50:01 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:23:53.642 06:50:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:53.642 06:50:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:53.642 06:50:01 -- common/autotest_common.sh@10 -- # set +x 00:23:53.642 ************************************ 00:23:53.642 START TEST ftl 00:23:53.642 ************************************ 00:23:53.642 06:50:01 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:23:53.642 * Looking for test storage... 00:23:53.642 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:53.642 06:50:01 ftl -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:53.642 06:50:01 ftl -- common/autotest_common.sh@1711 -- # lcov --version 00:23:53.642 06:50:01 ftl -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:53.642 06:50:01 ftl -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:53.642 06:50:01 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:53.642 06:50:01 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:53.642 06:50:01 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:53.642 06:50:01 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:23:53.642 06:50:01 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:23:53.642 06:50:01 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:23:53.642 06:50:01 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:23:53.642 06:50:01 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:23:53.642 06:50:01 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:23:53.642 06:50:01 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:23:53.642 06:50:01 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:53.642 06:50:01 ftl -- scripts/common.sh@344 -- # case "$op" in 00:23:53.642 06:50:01 ftl -- scripts/common.sh@345 -- # : 1 00:23:53.642 06:50:01 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:53.642 06:50:01 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:53.642 06:50:01 ftl -- scripts/common.sh@365 -- # decimal 1 00:23:53.642 06:50:01 ftl -- scripts/common.sh@353 -- # local d=1 00:23:53.642 06:50:01 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:53.642 06:50:01 ftl -- scripts/common.sh@355 -- # echo 1 00:23:53.642 06:50:01 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:23:53.642 06:50:01 ftl -- scripts/common.sh@366 -- # decimal 2 00:23:53.642 06:50:01 ftl -- scripts/common.sh@353 -- # local d=2 00:23:53.642 06:50:01 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:53.642 06:50:01 ftl -- scripts/common.sh@355 -- # echo 2 00:23:53.642 06:50:01 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:23:53.642 06:50:01 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:53.642 06:50:01 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:53.642 06:50:01 ftl -- scripts/common.sh@368 -- # return 0 00:23:53.642 06:50:01 ftl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:53.642 06:50:01 ftl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:53.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.642 --rc genhtml_branch_coverage=1 00:23:53.642 --rc genhtml_function_coverage=1 00:23:53.642 --rc genhtml_legend=1 00:23:53.642 --rc geninfo_all_blocks=1 00:23:53.642 --rc geninfo_unexecuted_blocks=1 00:23:53.642 00:23:53.642 ' 00:23:53.642 06:50:01 ftl -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:53.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.642 --rc genhtml_branch_coverage=1 00:23:53.642 --rc genhtml_function_coverage=1 00:23:53.642 --rc genhtml_legend=1 00:23:53.642 --rc geninfo_all_blocks=1 00:23:53.642 --rc geninfo_unexecuted_blocks=1 00:23:53.642 00:23:53.642 ' 00:23:53.642 06:50:01 ftl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:53.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.642 --rc genhtml_branch_coverage=1 00:23:53.642 --rc genhtml_function_coverage=1 00:23:53.642 --rc genhtml_legend=1 00:23:53.642 --rc geninfo_all_blocks=1 00:23:53.642 --rc geninfo_unexecuted_blocks=1 00:23:53.642 00:23:53.642 ' 00:23:53.642 06:50:01 ftl -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:53.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.642 --rc genhtml_branch_coverage=1 00:23:53.642 --rc genhtml_function_coverage=1 00:23:53.642 --rc genhtml_legend=1 00:23:53.642 --rc geninfo_all_blocks=1 00:23:53.642 --rc geninfo_unexecuted_blocks=1 00:23:53.642 00:23:53.642 ' 00:23:53.642 06:50:01 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:53.642 06:50:01 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:23:53.642 06:50:01 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:53.642 06:50:01 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:53.642 06:50:01 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:23:53.642 06:50:01 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:53.642 06:50:01 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:53.642 06:50:01 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:53.642 06:50:01 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:53.642 06:50:01 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:53.642 06:50:01 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:53.642 06:50:01 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:53.642 06:50:01 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:53.642 06:50:01 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:53.642 06:50:01 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:53.642 06:50:01 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:53.643 06:50:01 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:53.643 06:50:01 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:53.643 06:50:01 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:53.643 06:50:01 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:53.643 06:50:01 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:53.643 06:50:01 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:53.643 06:50:01 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:53.643 06:50:01 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:53.643 06:50:01 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:53.643 06:50:01 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:53.643 06:50:01 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:53.643 06:50:01 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:53.643 06:50:01 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:53.643 06:50:01 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:53.643 06:50:01 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:23:53.643 06:50:01 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:23:53.643 06:50:01 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:23:53.643 06:50:01 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:23:53.643 06:50:01 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:53.643 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:53.643 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:53.643 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:53.643 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:53.643 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:53.643 06:50:01 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=75398 00:23:53.643 06:50:01 ftl -- ftl/ftl.sh@38 -- # waitforlisten 75398 00:23:53.643 06:50:01 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:23:53.643 06:50:01 ftl -- common/autotest_common.sh@835 -- # '[' -z 75398 ']' 00:23:53.643 06:50:01 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.643 06:50:01 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:53.643 06:50:01 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:53.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:53.643 06:50:01 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:53.643 06:50:01 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:53.643 [2024-12-06 06:50:01.858812] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:23:53.643 [2024-12-06 06:50:01.859093] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75398 ] 00:23:53.643 [2024-12-06 06:50:02.014959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.643 [2024-12-06 06:50:02.097560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.643 06:50:02 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:53.643 06:50:02 ftl -- common/autotest_common.sh@868 -- # return 0 00:23:53.643 06:50:02 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:23:53.643 06:50:02 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:23:53.643 06:50:03 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:23:53.643 06:50:03 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:23:53.643 06:50:03 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:23:53.643 06:50:04 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:23:53.643 06:50:04 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:23:53.643 06:50:04 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:23:53.643 06:50:04 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:23:53.643 06:50:04 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:23:53.643 06:50:04 ftl -- ftl/ftl.sh@50 -- # break 00:23:53.643 06:50:04 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:23:53.643 06:50:04 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:23:53.643 06:50:04 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:23:53.643 06:50:04 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:23:53.643 06:50:04 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:23:53.643 06:50:04 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:23:53.643 06:50:04 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:23:53.643 06:50:04 ftl -- ftl/ftl.sh@63 -- # break 00:23:53.643 06:50:04 ftl -- ftl/ftl.sh@66 -- # killprocess 75398 00:23:53.643 06:50:04 ftl -- common/autotest_common.sh@954 -- # '[' -z 75398 ']' 00:23:53.643 06:50:04 ftl -- common/autotest_common.sh@958 -- # kill -0 75398 00:23:53.643 06:50:04 ftl -- common/autotest_common.sh@959 -- # uname 00:23:53.643 06:50:04 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:53.643 06:50:04 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75398 00:23:53.643 06:50:04 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:53.643 06:50:04 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:53.643 killing process with pid 75398 00:23:53.643 06:50:04 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75398' 00:23:53.643 06:50:04 ftl -- common/autotest_common.sh@973 -- # kill 75398 00:23:53.643 06:50:04 ftl -- common/autotest_common.sh@978 -- # wait 75398 00:23:53.643 06:50:05 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:23:53.643 06:50:05 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:23:53.643 06:50:05 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:53.643 06:50:05 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:53.643 06:50:05 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:53.643 ************************************ 00:23:53.643 START TEST ftl_fio_basic 00:23:53.643 ************************************ 00:23:53.643 06:50:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:23:53.643 * Looking for test storage... 00:23:53.643 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:53.643 06:50:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:53.643 06:50:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lcov --version 00:23:53.643 06:50:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:53.643 06:50:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:53.643 06:50:05 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:53.643 06:50:05 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:53.643 06:50:05 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:53.643 06:50:05 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:23:53.643 06:50:05 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:23:53.643 06:50:05 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:23:53.643 06:50:05 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:23:53.643 06:50:05 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:23:53.643 06:50:05 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:23:53.643 06:50:05 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:23:53.643 06:50:05 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:53.643 06:50:05 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:23:53.643 06:50:05 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:23:53.643 06:50:05 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:53.643 06:50:05 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:53.643 06:50:05 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:23:53.643 06:50:05 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:23:53.643 06:50:05 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:53.643 06:50:05 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:23:53.643 06:50:05 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:23:53.643 06:50:05 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:23:53.643 06:50:05 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:23:53.643 06:50:05 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:53.643 06:50:05 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:23:53.643 06:50:05 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:23:53.643 06:50:05 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:53.643 06:50:05 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:53.643 06:50:05 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:23:53.643 06:50:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:53.643 06:50:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:53.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.643 --rc genhtml_branch_coverage=1 00:23:53.643 --rc genhtml_function_coverage=1 00:23:53.643 --rc genhtml_legend=1 00:23:53.643 --rc geninfo_all_blocks=1 00:23:53.644 --rc geninfo_unexecuted_blocks=1 00:23:53.644 00:23:53.644 ' 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:53.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.644 --rc genhtml_branch_coverage=1 00:23:53.644 --rc genhtml_function_coverage=1 00:23:53.644 --rc genhtml_legend=1 00:23:53.644 --rc geninfo_all_blocks=1 00:23:53.644 --rc geninfo_unexecuted_blocks=1 00:23:53.644 00:23:53.644 ' 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:53.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.644 --rc genhtml_branch_coverage=1 00:23:53.644 --rc genhtml_function_coverage=1 00:23:53.644 --rc genhtml_legend=1 00:23:53.644 --rc geninfo_all_blocks=1 00:23:53.644 --rc geninfo_unexecuted_blocks=1 00:23:53.644 00:23:53.644 ' 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:53.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.644 --rc genhtml_branch_coverage=1 00:23:53.644 --rc genhtml_function_coverage=1 00:23:53.644 --rc genhtml_legend=1 00:23:53.644 --rc geninfo_all_blocks=1 00:23:53.644 --rc geninfo_unexecuted_blocks=1 00:23:53.644 00:23:53.644 ' 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
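Just above, ftl.sh picked the two NVMe devices for the run by filtering bdev_get_bdevs output through jq: the nv-cache must be a non-zoned bdev with 64-byte metadata and at least 1310720 blocks (0000:00:10.0 here), and the base device is the first other non-zoned bdev of the same minimum size (0000:00:11.0). The filters, verbatim from the log:

    # nv-cache candidates: 64B metadata, not zoned, >= 1310720 blocks
    rpc.py bdev_get_bdevs | jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'

    # base candidates: everything else of sufficient size (the cache's PCI
    # address is baked into the filter, exactly as the log shows)
    rpc.py bdev_get_bdevs | jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'

Both pipelines print PCI addresses; the script takes the first hit from each list and breaks out of its for loop, as the ftl.sh@50 and ftl.sh@63 break lines show.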
00:23:53.644 06:50:05 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=75525 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 75525 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 75525 ']' 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:53.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:53.644 06:50:05 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:53.644 [2024-12-06 06:50:05.880267] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
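Target bring-up, condensed from the fio.sh@42-46 trace just above: the cleanup trap is armed first, spdk_tgt is launched on a three-core mask (the '-c 7' visible in the EAL parameters), and the test blocks in the common.sh waitforlisten helper until the RPC socket answers. A sketch using the same paths the log records:

trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT               # fio.sh@42
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 &    # mask 0x7 = cores 0,1,2
svcpid=$!
waitforlisten "$svcpid"   # polls /var/tmp/spdk.sock; here it returned once pid 75525 was up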
00:23:53.644 [2024-12-06 06:50:05.880376] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75525 ] 00:23:53.644 [2024-12-06 06:50:06.038087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:53.644 [2024-12-06 06:50:06.138900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:53.644 [2024-12-06 06:50:06.139487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.644 [2024-12-06 06:50:06.139535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:54.209 06:50:06 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:54.209 06:50:06 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:23:54.209 06:50:06 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:54.209 06:50:06 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:23:54.209 06:50:06 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:54.209 06:50:06 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:23:54.209 06:50:06 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:23:54.209 06:50:06 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:54.466 06:50:06 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:54.466 06:50:06 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:23:54.466 06:50:06 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:54.466 06:50:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:23:54.466 06:50:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:54.466 06:50:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:23:54.466 06:50:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:23:54.466 06:50:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:54.466 06:50:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:54.466 { 00:23:54.466 "name": "nvme0n1", 00:23:54.466 "aliases": [ 00:23:54.466 "a7a3a871-9ea0-44f8-8277-1ce73601449b" 00:23:54.466 ], 00:23:54.466 "product_name": "NVMe disk", 00:23:54.466 "block_size": 4096, 00:23:54.466 "num_blocks": 1310720, 00:23:54.466 "uuid": "a7a3a871-9ea0-44f8-8277-1ce73601449b", 00:23:54.466 "numa_id": -1, 00:23:54.466 "assigned_rate_limits": { 00:23:54.466 "rw_ios_per_sec": 0, 00:23:54.466 "rw_mbytes_per_sec": 0, 00:23:54.466 "r_mbytes_per_sec": 0, 00:23:54.466 "w_mbytes_per_sec": 0 00:23:54.466 }, 00:23:54.466 "claimed": false, 00:23:54.466 "zoned": false, 00:23:54.466 "supported_io_types": { 00:23:54.466 "read": true, 00:23:54.466 "write": true, 00:23:54.466 "unmap": true, 00:23:54.466 "flush": true, 00:23:54.467 "reset": true, 00:23:54.467 "nvme_admin": true, 00:23:54.467 "nvme_io": true, 00:23:54.467 "nvme_io_md": false, 00:23:54.467 "write_zeroes": true, 00:23:54.467 "zcopy": false, 00:23:54.467 "get_zone_info": false, 00:23:54.467 "zone_management": false, 00:23:54.467 "zone_append": false, 00:23:54.467 "compare": true, 00:23:54.467 "compare_and_write": false, 00:23:54.467 "abort": true, 00:23:54.467 
"seek_hole": false, 00:23:54.467 "seek_data": false, 00:23:54.467 "copy": true, 00:23:54.467 "nvme_iov_md": false 00:23:54.467 }, 00:23:54.467 "driver_specific": { 00:23:54.467 "nvme": [ 00:23:54.467 { 00:23:54.467 "pci_address": "0000:00:11.0", 00:23:54.467 "trid": { 00:23:54.467 "trtype": "PCIe", 00:23:54.467 "traddr": "0000:00:11.0" 00:23:54.467 }, 00:23:54.467 "ctrlr_data": { 00:23:54.467 "cntlid": 0, 00:23:54.467 "vendor_id": "0x1b36", 00:23:54.467 "model_number": "QEMU NVMe Ctrl", 00:23:54.467 "serial_number": "12341", 00:23:54.467 "firmware_revision": "8.0.0", 00:23:54.467 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:54.467 "oacs": { 00:23:54.467 "security": 0, 00:23:54.467 "format": 1, 00:23:54.467 "firmware": 0, 00:23:54.467 "ns_manage": 1 00:23:54.467 }, 00:23:54.467 "multi_ctrlr": false, 00:23:54.467 "ana_reporting": false 00:23:54.467 }, 00:23:54.467 "vs": { 00:23:54.467 "nvme_version": "1.4" 00:23:54.467 }, 00:23:54.467 "ns_data": { 00:23:54.467 "id": 1, 00:23:54.467 "can_share": false 00:23:54.467 } 00:23:54.467 } 00:23:54.467 ], 00:23:54.467 "mp_policy": "active_passive" 00:23:54.467 } 00:23:54.467 } 00:23:54.467 ]' 00:23:54.467 06:50:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:54.467 06:50:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:23:54.467 06:50:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:54.724 06:50:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:23:54.724 06:50:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:23:54.724 06:50:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:23:54.724 06:50:07 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:23:54.724 06:50:07 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:54.724 06:50:07 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:23:54.724 06:50:07 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:54.724 06:50:07 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:54.724 06:50:07 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:23:54.724 06:50:07 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:54.982 06:50:07 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=08687166-3dd1-42b9-84d2-17e87febdd8d 00:23:54.982 06:50:07 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 08687166-3dd1-42b9-84d2-17e87febdd8d 00:23:55.256 06:50:07 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=cf557ef7-1811-4090-b054-5b3f645e4a92 00:23:55.256 06:50:07 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 cf557ef7-1811-4090-b054-5b3f645e4a92 00:23:55.256 06:50:07 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:23:55.256 06:50:07 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:55.256 06:50:07 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=cf557ef7-1811-4090-b054-5b3f645e4a92 00:23:55.256 06:50:07 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:23:55.256 06:50:07 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size cf557ef7-1811-4090-b054-5b3f645e4a92 00:23:55.256 06:50:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=cf557ef7-1811-4090-b054-5b3f645e4a92 
00:23:55.256 06:50:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:55.256 06:50:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:23:55.256 06:50:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:23:55.256 06:50:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cf557ef7-1811-4090-b054-5b3f645e4a92 00:23:55.512 06:50:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:55.512 { 00:23:55.512 "name": "cf557ef7-1811-4090-b054-5b3f645e4a92", 00:23:55.512 "aliases": [ 00:23:55.512 "lvs/nvme0n1p0" 00:23:55.512 ], 00:23:55.512 "product_name": "Logical Volume", 00:23:55.512 "block_size": 4096, 00:23:55.512 "num_blocks": 26476544, 00:23:55.512 "uuid": "cf557ef7-1811-4090-b054-5b3f645e4a92", 00:23:55.512 "assigned_rate_limits": { 00:23:55.512 "rw_ios_per_sec": 0, 00:23:55.512 "rw_mbytes_per_sec": 0, 00:23:55.512 "r_mbytes_per_sec": 0, 00:23:55.512 "w_mbytes_per_sec": 0 00:23:55.512 }, 00:23:55.512 "claimed": false, 00:23:55.512 "zoned": false, 00:23:55.512 "supported_io_types": { 00:23:55.512 "read": true, 00:23:55.512 "write": true, 00:23:55.512 "unmap": true, 00:23:55.512 "flush": false, 00:23:55.512 "reset": true, 00:23:55.512 "nvme_admin": false, 00:23:55.512 "nvme_io": false, 00:23:55.512 "nvme_io_md": false, 00:23:55.512 "write_zeroes": true, 00:23:55.512 "zcopy": false, 00:23:55.512 "get_zone_info": false, 00:23:55.512 "zone_management": false, 00:23:55.512 "zone_append": false, 00:23:55.512 "compare": false, 00:23:55.512 "compare_and_write": false, 00:23:55.512 "abort": false, 00:23:55.512 "seek_hole": true, 00:23:55.512 "seek_data": true, 00:23:55.512 "copy": false, 00:23:55.512 "nvme_iov_md": false 00:23:55.512 }, 00:23:55.512 "driver_specific": { 00:23:55.512 "lvol": { 00:23:55.512 "lvol_store_uuid": "08687166-3dd1-42b9-84d2-17e87febdd8d", 00:23:55.512 "base_bdev": "nvme0n1", 00:23:55.512 "thin_provision": true, 00:23:55.512 "num_allocated_clusters": 0, 00:23:55.512 "snapshot": false, 00:23:55.512 "clone": false, 00:23:55.512 "esnap_clone": false 00:23:55.512 } 00:23:55.512 } 00:23:55.512 } 00:23:55.512 ]' 00:23:55.512 06:50:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:55.512 06:50:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:23:55.512 06:50:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:55.512 06:50:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:55.512 06:50:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:55.512 06:50:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:23:55.513 06:50:08 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:23:55.513 06:50:08 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:23:55.513 06:50:08 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:55.770 06:50:08 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:55.770 06:50:08 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:55.770 06:50:08 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size cf557ef7-1811-4090-b054-5b3f645e4a92 00:23:55.770 06:50:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=cf557ef7-1811-4090-b054-5b3f645e4a92 00:23:55.770 06:50:08 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:55.770 06:50:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:23:55.770 06:50:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:23:55.770 06:50:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cf557ef7-1811-4090-b054-5b3f645e4a92 00:23:56.028 06:50:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:56.028 { 00:23:56.028 "name": "cf557ef7-1811-4090-b054-5b3f645e4a92", 00:23:56.028 "aliases": [ 00:23:56.028 "lvs/nvme0n1p0" 00:23:56.028 ], 00:23:56.028 "product_name": "Logical Volume", 00:23:56.028 "block_size": 4096, 00:23:56.028 "num_blocks": 26476544, 00:23:56.028 "uuid": "cf557ef7-1811-4090-b054-5b3f645e4a92", 00:23:56.028 "assigned_rate_limits": { 00:23:56.028 "rw_ios_per_sec": 0, 00:23:56.028 "rw_mbytes_per_sec": 0, 00:23:56.028 "r_mbytes_per_sec": 0, 00:23:56.028 "w_mbytes_per_sec": 0 00:23:56.028 }, 00:23:56.028 "claimed": false, 00:23:56.028 "zoned": false, 00:23:56.028 "supported_io_types": { 00:23:56.028 "read": true, 00:23:56.028 "write": true, 00:23:56.028 "unmap": true, 00:23:56.028 "flush": false, 00:23:56.028 "reset": true, 00:23:56.028 "nvme_admin": false, 00:23:56.028 "nvme_io": false, 00:23:56.028 "nvme_io_md": false, 00:23:56.028 "write_zeroes": true, 00:23:56.028 "zcopy": false, 00:23:56.028 "get_zone_info": false, 00:23:56.028 "zone_management": false, 00:23:56.028 "zone_append": false, 00:23:56.028 "compare": false, 00:23:56.028 "compare_and_write": false, 00:23:56.028 "abort": false, 00:23:56.028 "seek_hole": true, 00:23:56.028 "seek_data": true, 00:23:56.028 "copy": false, 00:23:56.028 "nvme_iov_md": false 00:23:56.028 }, 00:23:56.028 "driver_specific": { 00:23:56.028 "lvol": { 00:23:56.028 "lvol_store_uuid": "08687166-3dd1-42b9-84d2-17e87febdd8d", 00:23:56.028 "base_bdev": "nvme0n1", 00:23:56.028 "thin_provision": true, 00:23:56.028 "num_allocated_clusters": 0, 00:23:56.028 "snapshot": false, 00:23:56.028 "clone": false, 00:23:56.028 "esnap_clone": false 00:23:56.028 } 00:23:56.028 } 00:23:56.028 } 00:23:56.028 ]' 00:23:56.028 06:50:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:56.028 06:50:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:23:56.028 06:50:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:56.028 06:50:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:56.028 06:50:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:56.028 06:50:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:23:56.028 06:50:08 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:23:56.028 06:50:08 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:56.286 06:50:08 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:23:56.286 06:50:08 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:23:56.286 06:50:08 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:23:56.286 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:23:56.286 06:50:08 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size cf557ef7-1811-4090-b054-5b3f645e4a92 00:23:56.286 06:50:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=cf557ef7-1811-4090-b054-5b3f645e4a92 00:23:56.286 06:50:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:56.286 06:50:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:23:56.286 06:50:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:23:56.286 06:50:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cf557ef7-1811-4090-b054-5b3f645e4a92 00:23:56.544 06:50:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:56.544 { 00:23:56.544 "name": "cf557ef7-1811-4090-b054-5b3f645e4a92", 00:23:56.544 "aliases": [ 00:23:56.544 "lvs/nvme0n1p0" 00:23:56.544 ], 00:23:56.544 "product_name": "Logical Volume", 00:23:56.544 "block_size": 4096, 00:23:56.544 "num_blocks": 26476544, 00:23:56.544 "uuid": "cf557ef7-1811-4090-b054-5b3f645e4a92", 00:23:56.544 "assigned_rate_limits": { 00:23:56.544 "rw_ios_per_sec": 0, 00:23:56.544 "rw_mbytes_per_sec": 0, 00:23:56.544 "r_mbytes_per_sec": 0, 00:23:56.544 "w_mbytes_per_sec": 0 00:23:56.544 }, 00:23:56.544 "claimed": false, 00:23:56.544 "zoned": false, 00:23:56.544 "supported_io_types": { 00:23:56.544 "read": true, 00:23:56.544 "write": true, 00:23:56.544 "unmap": true, 00:23:56.544 "flush": false, 00:23:56.544 "reset": true, 00:23:56.544 "nvme_admin": false, 00:23:56.544 "nvme_io": false, 00:23:56.544 "nvme_io_md": false, 00:23:56.544 "write_zeroes": true, 00:23:56.544 "zcopy": false, 00:23:56.544 "get_zone_info": false, 00:23:56.544 "zone_management": false, 00:23:56.544 "zone_append": false, 00:23:56.544 "compare": false, 00:23:56.544 "compare_and_write": false, 00:23:56.544 "abort": false, 00:23:56.544 "seek_hole": true, 00:23:56.544 "seek_data": true, 00:23:56.544 "copy": false, 00:23:56.544 "nvme_iov_md": false 00:23:56.544 }, 00:23:56.544 "driver_specific": { 00:23:56.544 "lvol": { 00:23:56.544 "lvol_store_uuid": "08687166-3dd1-42b9-84d2-17e87febdd8d", 00:23:56.544 "base_bdev": "nvme0n1", 00:23:56.544 "thin_provision": true, 00:23:56.544 "num_allocated_clusters": 0, 00:23:56.544 "snapshot": false, 00:23:56.544 "clone": false, 00:23:56.544 "esnap_clone": false 00:23:56.544 } 00:23:56.544 } 00:23:56.544 } 00:23:56.544 ]' 00:23:56.544 06:50:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:56.544 06:50:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:23:56.544 06:50:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:56.544 06:50:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:56.544 06:50:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:56.544 06:50:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:23:56.544 06:50:09 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:23:56.544 06:50:09 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:23:56.544 06:50:09 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d cf557ef7-1811-4090-b054-5b3f645e4a92 -c nvc0n1p0 --l2p_dram_limit 60 00:23:56.802 [2024-12-06 06:50:09.365534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.802 [2024-12-06 06:50:09.365579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:56.802 [2024-12-06 06:50:09.365592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:56.802 
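Pulling the RPC calls out of the trace so far: the stack under test is a thin lvol on the 0000:00:11.0 namespace as the FTL base device and a 5171 MiB split of the 0000:00:10.0 namespace as its NV cache. (The one stderr line, '[: -eq: unary operator expected' at fio.sh line 52, is a cosmetic shell bug, a numeric test on a variable that is empty in the basic suite; it does not fail the run.) The sequence, condensed, with names and UUIDs as recorded in this run:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0    # base NVMe -> nvme0n1
$rpc bdev_lvol_create_lvstore nvme0n1 lvs                            # -> lvs 08687166-3dd1-...
$rpc bdev_lvol_create nvme0n1p0 103424 -t -u 08687166-3dd1-42b9-84d2-17e87febdd8d
$rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0     # cache NVMe -> nvc0n1
$rpc bdev_split_create nvc0n1 -s 5171 1                              # -> nvc0n1p0 (5171 MiB)
$rpc -t 240 bdev_ftl_create -b ftl0 -d cf557ef7-1811-4090-b054-5b3f645e4a92 -c nvc0n1p0 --l2p_dram_limit 60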
[2024-12-06 06:50:09.365600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.802 [2024-12-06 06:50:09.365653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.802 [2024-12-06 06:50:09.365662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:56.802 [2024-12-06 06:50:09.365672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:23:56.802 [2024-12-06 06:50:09.365678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.802 [2024-12-06 06:50:09.365714] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:56.802 [2024-12-06 06:50:09.366308] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:56.802 [2024-12-06 06:50:09.366329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.802 [2024-12-06 06:50:09.366336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:56.802 [2024-12-06 06:50:09.366345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.625 ms 00:23:56.802 [2024-12-06 06:50:09.366351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.802 [2024-12-06 06:50:09.366409] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 09ff0377-93e2-4a5d-be3c-8c2126c2889d 00:23:56.802 [2024-12-06 06:50:09.367388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.802 [2024-12-06 06:50:09.367425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:56.802 [2024-12-06 06:50:09.367433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:23:56.802 [2024-12-06 06:50:09.367441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.802 [2024-12-06 06:50:09.372263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.802 [2024-12-06 06:50:09.372291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:56.802 [2024-12-06 06:50:09.372300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.761 ms 00:23:56.802 [2024-12-06 06:50:09.372307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.802 [2024-12-06 06:50:09.372390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.802 [2024-12-06 06:50:09.372399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:56.802 [2024-12-06 06:50:09.372406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:23:56.802 [2024-12-06 06:50:09.372415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.802 [2024-12-06 06:50:09.372460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.802 [2024-12-06 06:50:09.372479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:56.802 [2024-12-06 06:50:09.372486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:56.802 [2024-12-06 06:50:09.372493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.802 [2024-12-06 06:50:09.372517] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:56.802 [2024-12-06 06:50:09.375423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.802 [2024-12-06 
06:50:09.375445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:56.802 [2024-12-06 06:50:09.375455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.910 ms 00:23:56.802 [2024-12-06 06:50:09.375471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.802 [2024-12-06 06:50:09.375513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.802 [2024-12-06 06:50:09.375520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:56.802 [2024-12-06 06:50:09.375527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:56.802 [2024-12-06 06:50:09.375533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.802 [2024-12-06 06:50:09.375555] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:56.802 [2024-12-06 06:50:09.375676] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:56.802 [2024-12-06 06:50:09.375692] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:56.802 [2024-12-06 06:50:09.375701] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:56.802 [2024-12-06 06:50:09.375711] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:56.802 [2024-12-06 06:50:09.375718] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:56.802 [2024-12-06 06:50:09.375727] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:56.802 [2024-12-06 06:50:09.375733] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:56.802 [2024-12-06 06:50:09.375739] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:56.802 [2024-12-06 06:50:09.375745] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:56.802 [2024-12-06 06:50:09.375752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.802 [2024-12-06 06:50:09.375759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:56.802 [2024-12-06 06:50:09.375766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:23:56.802 [2024-12-06 06:50:09.375772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.803 [2024-12-06 06:50:09.375844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.803 [2024-12-06 06:50:09.375851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:56.803 [2024-12-06 06:50:09.375858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:23:56.803 [2024-12-06 06:50:09.375864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.803 [2024-12-06 06:50:09.375960] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:56.803 [2024-12-06 06:50:09.375967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:56.803 [2024-12-06 06:50:09.375976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:56.803 [2024-12-06 06:50:09.375983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:56.803 [2024-12-06 06:50:09.375990] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:23:56.803 [2024-12-06 06:50:09.375995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:56.803 [2024-12-06 06:50:09.376001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:56.803 [2024-12-06 06:50:09.376007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:56.803 [2024-12-06 06:50:09.376014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:56.803 [2024-12-06 06:50:09.376020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:56.803 [2024-12-06 06:50:09.376027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:56.803 [2024-12-06 06:50:09.376032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:56.803 [2024-12-06 06:50:09.376038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:56.803 [2024-12-06 06:50:09.376044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:56.803 [2024-12-06 06:50:09.376050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:56.803 [2024-12-06 06:50:09.376055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:56.803 [2024-12-06 06:50:09.376063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:56.803 [2024-12-06 06:50:09.376072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:56.803 [2024-12-06 06:50:09.376079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:56.803 [2024-12-06 06:50:09.376085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:56.803 [2024-12-06 06:50:09.376092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:56.803 [2024-12-06 06:50:09.376097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:56.803 [2024-12-06 06:50:09.376104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:56.803 [2024-12-06 06:50:09.376109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:56.803 [2024-12-06 06:50:09.376115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:56.803 [2024-12-06 06:50:09.376120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:56.803 [2024-12-06 06:50:09.376127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:56.803 [2024-12-06 06:50:09.376132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:56.803 [2024-12-06 06:50:09.376138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:56.803 [2024-12-06 06:50:09.376143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:56.803 [2024-12-06 06:50:09.376149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:56.803 [2024-12-06 06:50:09.376154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:56.803 [2024-12-06 06:50:09.376162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:56.803 [2024-12-06 06:50:09.376178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:56.803 [2024-12-06 06:50:09.376185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:56.803 [2024-12-06 06:50:09.376190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:56.803 [2024-12-06 06:50:09.376196] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:56.803 [2024-12-06 06:50:09.376201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:56.803 [2024-12-06 06:50:09.376207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:56.803 [2024-12-06 06:50:09.376212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:56.803 [2024-12-06 06:50:09.376219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:56.803 [2024-12-06 06:50:09.376224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:56.803 [2024-12-06 06:50:09.376230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:56.803 [2024-12-06 06:50:09.376235] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:56.803 [2024-12-06 06:50:09.376243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:56.803 [2024-12-06 06:50:09.376248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:56.803 [2024-12-06 06:50:09.376255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:56.803 [2024-12-06 06:50:09.376261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:56.803 [2024-12-06 06:50:09.376268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:56.803 [2024-12-06 06:50:09.376275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:56.803 [2024-12-06 06:50:09.376282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:56.803 [2024-12-06 06:50:09.376288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:56.803 [2024-12-06 06:50:09.376295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:56.803 [2024-12-06 06:50:09.376301] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:56.803 [2024-12-06 06:50:09.376311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:56.803 [2024-12-06 06:50:09.376318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:56.803 [2024-12-06 06:50:09.376325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:56.803 [2024-12-06 06:50:09.376331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:56.803 [2024-12-06 06:50:09.376338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:56.803 [2024-12-06 06:50:09.376344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:56.803 [2024-12-06 06:50:09.376352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:56.803 [2024-12-06 06:50:09.376357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:56.803 [2024-12-06 06:50:09.376364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:23:56.803 [2024-12-06 06:50:09.376370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:56.803 [2024-12-06 06:50:09.376378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:56.803 [2024-12-06 06:50:09.376384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:56.803 [2024-12-06 06:50:09.376390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:56.803 [2024-12-06 06:50:09.376396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:56.803 [2024-12-06 06:50:09.376403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:56.803 [2024-12-06 06:50:09.376409] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:56.803 [2024-12-06 06:50:09.376416] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:56.803 [2024-12-06 06:50:09.376424] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:56.803 [2024-12-06 06:50:09.376431] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:56.803 [2024-12-06 06:50:09.376437] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:56.803 [2024-12-06 06:50:09.376443] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:56.803 [2024-12-06 06:50:09.376449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.803 [2024-12-06 06:50:09.376456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:56.803 [2024-12-06 06:50:09.376472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:23:56.803 [2024-12-06 06:50:09.376479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.803 [2024-12-06 06:50:09.376538] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
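The layout dump above cross-checks cleanly against its own header numbers; in particular the L2P and P2L region sizes follow directly from '20971520 L2P entries x 4 B' and '2048 P2L checkpoint pages x 4096 B'. Pure arithmetic, nothing assumed beyond the printed values:

echo $(( 20971520 * 4 / 1024 / 1024 ))     # 80 -> 'Region l2p ... blocks: 80.00 MiB'
echo $(( 2048 * 4096 / 1024 / 1024 ))      # 8  -> each p2l0..p2l3 region: 8.00 MiB
echo $(( 20971520 * 4096 / 1024 ** 3 ))    # 80 -> 80 GiB of user-visible FTL capacity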
00:23:56.803 [2024-12-06 06:50:09.376549] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:59.383 [2024-12-06 06:50:11.782980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.383 [2024-12-06 06:50:11.783042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:59.383 [2024-12-06 06:50:11.783057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2406.433 ms 00:23:59.383 [2024-12-06 06:50:11.783068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.383 [2024-12-06 06:50:11.808444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.383 [2024-12-06 06:50:11.808506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:59.383 [2024-12-06 06:50:11.808519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.177 ms 00:23:59.383 [2024-12-06 06:50:11.808529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.383 [2024-12-06 06:50:11.808673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.383 [2024-12-06 06:50:11.808686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:59.383 [2024-12-06 06:50:11.808695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:23:59.383 [2024-12-06 06:50:11.808706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.383 [2024-12-06 06:50:11.859077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.383 [2024-12-06 06:50:11.859133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:59.383 [2024-12-06 06:50:11.859149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.330 ms 00:23:59.383 [2024-12-06 06:50:11.859161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.383 [2024-12-06 06:50:11.859207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.383 [2024-12-06 06:50:11.859219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:59.383 [2024-12-06 06:50:11.859228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:59.383 [2024-12-06 06:50:11.859236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.383 [2024-12-06 06:50:11.859638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.383 [2024-12-06 06:50:11.859663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:59.383 [2024-12-06 06:50:11.859672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:23:59.383 [2024-12-06 06:50:11.859683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.383 [2024-12-06 06:50:11.859818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.383 [2024-12-06 06:50:11.859829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:59.383 [2024-12-06 06:50:11.859837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:23:59.383 [2024-12-06 06:50:11.859847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.383 [2024-12-06 06:50:11.874172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.383 [2024-12-06 06:50:11.874209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:59.383 [2024-12-06 
06:50:11.874219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.302 ms 00:23:59.383 [2024-12-06 06:50:11.874229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.383 [2024-12-06 06:50:11.885640] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:59.383 [2024-12-06 06:50:11.899512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.383 [2024-12-06 06:50:11.899548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:59.383 [2024-12-06 06:50:11.899565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.165 ms 00:23:59.383 [2024-12-06 06:50:11.899574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.383 [2024-12-06 06:50:11.958448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.383 [2024-12-06 06:50:11.958514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:59.383 [2024-12-06 06:50:11.958533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.829 ms 00:23:59.383 [2024-12-06 06:50:11.958541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.384 [2024-12-06 06:50:11.958733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.384 [2024-12-06 06:50:11.958744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:59.384 [2024-12-06 06:50:11.958758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:23:59.384 [2024-12-06 06:50:11.958766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.384 [2024-12-06 06:50:11.981256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.384 [2024-12-06 06:50:11.981298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:59.384 [2024-12-06 06:50:11.981312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.433 ms 00:23:59.384 [2024-12-06 06:50:11.981320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.384 [2024-12-06 06:50:12.003723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.384 [2024-12-06 06:50:12.003760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:59.384 [2024-12-06 06:50:12.003774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.353 ms 00:23:59.384 [2024-12-06 06:50:12.003782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.384 [2024-12-06 06:50:12.004348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.384 [2024-12-06 06:50:12.004364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:59.384 [2024-12-06 06:50:12.004374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.529 ms 00:23:59.384 [2024-12-06 06:50:12.004381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.384 [2024-12-06 06:50:12.069398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.384 [2024-12-06 06:50:12.069446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:59.384 [2024-12-06 06:50:12.069471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.966 ms 00:23:59.384 [2024-12-06 06:50:12.069482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.667 [2024-12-06 
06:50:12.093411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.667 [2024-12-06 06:50:12.093455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:59.667 [2024-12-06 06:50:12.093478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.831 ms 00:23:59.667 [2024-12-06 06:50:12.093488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.668 [2024-12-06 06:50:12.116866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.668 [2024-12-06 06:50:12.116911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:59.668 [2024-12-06 06:50:12.116924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.335 ms 00:23:59.668 [2024-12-06 06:50:12.116933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.668 [2024-12-06 06:50:12.140074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.668 [2024-12-06 06:50:12.140116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:59.668 [2024-12-06 06:50:12.140130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.095 ms 00:23:59.668 [2024-12-06 06:50:12.140138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.668 [2024-12-06 06:50:12.140184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.668 [2024-12-06 06:50:12.140194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:59.668 [2024-12-06 06:50:12.140209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:59.668 [2024-12-06 06:50:12.140216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.668 [2024-12-06 06:50:12.140303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.668 [2024-12-06 06:50:12.140313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:59.668 [2024-12-06 06:50:12.140322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:23:59.668 [2024-12-06 06:50:12.140330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.668 [2024-12-06 06:50:12.141245] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2775.286 ms, result 0 00:23:59.668 { 00:23:59.668 "name": "ftl0", 00:23:59.668 "uuid": "09ff0377-93e2-4a5d-be3c-8c2126c2889d" 00:23:59.668 } 00:23:59.668 06:50:12 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:23:59.668 06:50:12 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:23:59.668 06:50:12 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:59.668 06:50:12 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:23:59.668 06:50:12 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:59.668 06:50:12 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:59.668 06:50:12 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:23:59.668 06:50:12 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:23:59.925 [ 00:23:59.925 { 00:23:59.925 "name": "ftl0", 00:23:59.925 "aliases": [ 00:23:59.925 "09ff0377-93e2-4a5d-be3c-8c2126c2889d" 00:23:59.925 ], 00:23:59.925 "product_name": "FTL 
disk", 00:23:59.925 "block_size": 4096, 00:23:59.925 "num_blocks": 20971520, 00:23:59.925 "uuid": "09ff0377-93e2-4a5d-be3c-8c2126c2889d", 00:23:59.925 "assigned_rate_limits": { 00:23:59.925 "rw_ios_per_sec": 0, 00:23:59.925 "rw_mbytes_per_sec": 0, 00:23:59.925 "r_mbytes_per_sec": 0, 00:23:59.926 "w_mbytes_per_sec": 0 00:23:59.926 }, 00:23:59.926 "claimed": false, 00:23:59.926 "zoned": false, 00:23:59.926 "supported_io_types": { 00:23:59.926 "read": true, 00:23:59.926 "write": true, 00:23:59.926 "unmap": true, 00:23:59.926 "flush": true, 00:23:59.926 "reset": false, 00:23:59.926 "nvme_admin": false, 00:23:59.926 "nvme_io": false, 00:23:59.926 "nvme_io_md": false, 00:23:59.926 "write_zeroes": true, 00:23:59.926 "zcopy": false, 00:23:59.926 "get_zone_info": false, 00:23:59.926 "zone_management": false, 00:23:59.926 "zone_append": false, 00:23:59.926 "compare": false, 00:23:59.926 "compare_and_write": false, 00:23:59.926 "abort": false, 00:23:59.926 "seek_hole": false, 00:23:59.926 "seek_data": false, 00:23:59.926 "copy": false, 00:23:59.926 "nvme_iov_md": false 00:23:59.926 }, 00:23:59.926 "driver_specific": { 00:23:59.926 "ftl": { 00:23:59.926 "base_bdev": "cf557ef7-1811-4090-b054-5b3f645e4a92", 00:23:59.926 "cache": "nvc0n1p0" 00:23:59.926 } 00:23:59.926 } 00:23:59.926 } 00:23:59.926 ] 00:23:59.926 06:50:12 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:23:59.926 06:50:12 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:23:59.926 06:50:12 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:24:00.183 06:50:12 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:24:00.183 06:50:12 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:00.442 [2024-12-06 06:50:12.957925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.442 [2024-12-06 06:50:12.957977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:00.442 [2024-12-06 06:50:12.957991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:00.442 [2024-12-06 06:50:12.958000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.442 [2024-12-06 06:50:12.958031] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:00.442 [2024-12-06 06:50:12.960630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.442 [2024-12-06 06:50:12.960658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:00.442 [2024-12-06 06:50:12.960670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.580 ms 00:24:00.442 [2024-12-06 06:50:12.960679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.442 [2024-12-06 06:50:12.961055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.442 [2024-12-06 06:50:12.961070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:00.442 [2024-12-06 06:50:12.961081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.342 ms 00:24:00.442 [2024-12-06 06:50:12.961089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.442 [2024-12-06 06:50:12.964329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.442 [2024-12-06 06:50:12.964359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:00.442 
[2024-12-06 06:50:12.964370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.218 ms 00:24:00.442 [2024-12-06 06:50:12.964378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.442 [2024-12-06 06:50:12.970593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.442 [2024-12-06 06:50:12.970616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:00.442 [2024-12-06 06:50:12.970628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.192 ms 00:24:00.442 [2024-12-06 06:50:12.970636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.442 [2024-12-06 06:50:12.993807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.442 [2024-12-06 06:50:12.993853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:00.442 [2024-12-06 06:50:12.993879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.074 ms 00:24:00.442 [2024-12-06 06:50:12.993887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.442 [2024-12-06 06:50:13.008776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.442 [2024-12-06 06:50:13.008813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:00.442 [2024-12-06 06:50:13.008830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.835 ms 00:24:00.442 [2024-12-06 06:50:13.008840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.442 [2024-12-06 06:50:13.009029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.442 [2024-12-06 06:50:13.009040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:00.442 [2024-12-06 06:50:13.009051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:24:00.442 [2024-12-06 06:50:13.009058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.442 [2024-12-06 06:50:13.032224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.442 [2024-12-06 06:50:13.032264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:00.442 [2024-12-06 06:50:13.032278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.139 ms 00:24:00.442 [2024-12-06 06:50:13.032285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.442 [2024-12-06 06:50:13.055071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.442 [2024-12-06 06:50:13.055110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:00.442 [2024-12-06 06:50:13.055122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.739 ms 00:24:00.442 [2024-12-06 06:50:13.055130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.442 [2024-12-06 06:50:13.077641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.442 [2024-12-06 06:50:13.077679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:00.442 [2024-12-06 06:50:13.077693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.463 ms 00:24:00.442 [2024-12-06 06:50:13.077701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.442 [2024-12-06 06:50:13.099997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.442 [2024-12-06 06:50:13.100034] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:00.442 [2024-12-06 06:50:13.100049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.204 ms 00:24:00.442 [2024-12-06 06:50:13.100057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.442 [2024-12-06 06:50:13.100103] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:00.442 [2024-12-06 06:50:13.100117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:00.442 [2024-12-06 06:50:13.100129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:00.442 [2024-12-06 06:50:13.100137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:00.442 [2024-12-06 06:50:13.100147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:00.442 [2024-12-06 06:50:13.100154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:00.442 [2024-12-06 06:50:13.100163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:00.442 [2024-12-06 06:50:13.100171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:00.442 [2024-12-06 06:50:13.100182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:00.442 [2024-12-06 06:50:13.100189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 
[2024-12-06 06:50:13.100307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:24:00.443 [2024-12-06 06:50:13.100533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:00.443 [2024-12-06 06:50:13.100982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:00.444 [2024-12-06 06:50:13.100991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:00.444 [2024-12-06 06:50:13.101006] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:00.444 [2024-12-06 06:50:13.101015] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 09ff0377-93e2-4a5d-be3c-8c2126c2889d 00:24:00.444 [2024-12-06 06:50:13.101023] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:00.444 [2024-12-06 06:50:13.101033] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:00.444 [2024-12-06 06:50:13.101040] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:00.444 [2024-12-06 06:50:13.101051] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:00.444 [2024-12-06 06:50:13.101058] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:00.444 [2024-12-06 06:50:13.101067] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:00.444 [2024-12-06 06:50:13.101074] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:00.444 [2024-12-06 06:50:13.101082] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:00.444 [2024-12-06 06:50:13.101088] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:00.444 [2024-12-06 06:50:13.101097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.444 [2024-12-06 06:50:13.101104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:00.444 [2024-12-06 06:50:13.101114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.995 ms 00:24:00.444 [2024-12-06 06:50:13.101121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.444 [2024-12-06 06:50:13.113389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.444 [2024-12-06 06:50:13.113423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:00.444 [2024-12-06 06:50:13.113435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.226 ms 00:24:00.444 [2024-12-06 06:50:13.113443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.444 [2024-12-06 06:50:13.113804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.444 [2024-12-06 06:50:13.113817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:00.444 [2024-12-06 06:50:13.113827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:24:00.444 [2024-12-06 06:50:13.113834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.444 [2024-12-06 06:50:13.157493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.444 [2024-12-06 06:50:13.157537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:00.444 [2024-12-06 06:50:13.157550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.444 [2024-12-06 06:50:13.157558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
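A note on the ftl_debug statistics dump above: "WAF: inf" is not an error. Write amplification is the ratio

    WAF = total writes / user writes = 960 / 0 -> inf

and since this shutdown happens with zero user writes on ftl0, every one of the 960 media writes is FTL-internal (for example the superblock, band info, and L2P persists traced in the steps above), so the ratio degenerates to infinity.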
00:24:00.444 [2024-12-06 06:50:13.157627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.444 [2024-12-06 06:50:13.157635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:00.444 [2024-12-06 06:50:13.157644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.444 [2024-12-06 06:50:13.157652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.444 [2024-12-06 06:50:13.157743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.444 [2024-12-06 06:50:13.157755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:00.444 [2024-12-06 06:50:13.157765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.444 [2024-12-06 06:50:13.157773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.444 [2024-12-06 06:50:13.157799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.444 [2024-12-06 06:50:13.157807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:00.444 [2024-12-06 06:50:13.157816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.444 [2024-12-06 06:50:13.157823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.702 [2024-12-06 06:50:13.239090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.702 [2024-12-06 06:50:13.239140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:00.702 [2024-12-06 06:50:13.239153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.702 [2024-12-06 06:50:13.239161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.702 [2024-12-06 06:50:13.302102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.702 [2024-12-06 06:50:13.302144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:00.702 [2024-12-06 06:50:13.302156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.702 [2024-12-06 06:50:13.302164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.702 [2024-12-06 06:50:13.302252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.702 [2024-12-06 06:50:13.302262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:00.702 [2024-12-06 06:50:13.302274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.702 [2024-12-06 06:50:13.302282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.702 [2024-12-06 06:50:13.302338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.702 [2024-12-06 06:50:13.302347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:00.702 [2024-12-06 06:50:13.302356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.702 [2024-12-06 06:50:13.302363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.702 [2024-12-06 06:50:13.302478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.702 [2024-12-06 06:50:13.302489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:00.702 [2024-12-06 06:50:13.302498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.702 [2024-12-06 
06:50:13.302507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.702 [2024-12-06 06:50:13.302557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.702 [2024-12-06 06:50:13.302566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:00.702 [2024-12-06 06:50:13.302575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.702 [2024-12-06 06:50:13.302582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.702 [2024-12-06 06:50:13.302625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.702 [2024-12-06 06:50:13.302633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:00.702 [2024-12-06 06:50:13.302642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.702 [2024-12-06 06:50:13.302650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.702 [2024-12-06 06:50:13.302701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.702 [2024-12-06 06:50:13.302711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:00.702 [2024-12-06 06:50:13.302720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.702 [2024-12-06 06:50:13.302727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.702 [2024-12-06 06:50:13.302879] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 344.930 ms, result 0 00:24:00.702 true 00:24:00.702 06:50:13 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 75525 00:24:00.702 06:50:13 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 75525 ']' 00:24:00.702 06:50:13 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 75525 00:24:00.702 06:50:13 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:24:00.702 06:50:13 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:00.702 06:50:13 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75525 00:24:00.702 06:50:13 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:00.702 06:50:13 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:00.702 killing process with pid 75525 00:24:00.702 06:50:13 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75525' 00:24:00.702 06:50:13 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 75525 00:24:00.702 06:50:13 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 75525 00:24:08.833 06:50:21 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:24:08.833 06:50:21 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:24:08.833 06:50:21 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:24:08.833 06:50:21 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:08.833 06:50:21 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:08.833 06:50:21 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:24:08.833 06:50:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:24:08.833 06:50:21 ftl.ftl_fio_basic -- 
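The killprocess sequence traced above (kill -0 to confirm pid 75525 still exists, ps --no-headers -o comm= to read its process name, then kill and wait) reduces to roughly the following. This is a hedged, simplified sketch in plain bash, not the exact autotest_common.sh helper; in particular the real helper treats a sudo wrapper specially rather than simply bailing out:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                 # pid must still be alive
        local name
        name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0 above
        [ "$name" = sudo ] && return 1             # simplification, see note
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                # reap and propagate status
    }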
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:08.833 06:50:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:08.833 06:50:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:08.833 06:50:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:08.833 06:50:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:24:08.833 06:50:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:08.833 06:50:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:08.833 06:50:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:08.833 06:50:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:24:08.833 06:50:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:08.833 06:50:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:24:08.833 06:50:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:24:08.833 06:50:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:24:08.833 06:50:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:08.833 06:50:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:24:09.091 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:24:09.091 fio-3.35 00:24:09.091 Starting 1 thread 00:24:13.271 00:24:13.271 test: (groupid=0, jobs=1): err= 0: pid=75716: Fri Dec 6 06:50:25 2024 00:24:13.271 read: IOPS=1308, BW=86.9MiB/s (91.1MB/s)(255MiB/2930msec) 00:24:13.271 slat (nsec): min=3092, max=24388, avg=4594.68, stdev=2092.09 00:24:13.271 clat (usec): min=249, max=976, avg=342.61, stdev=55.03 00:24:13.271 lat (usec): min=253, max=981, avg=347.20, stdev=55.84 00:24:13.271 clat percentiles (usec): 00:24:13.271 | 1.00th=[ 265], 5.00th=[ 314], 10.00th=[ 318], 20.00th=[ 322], 00:24:13.271 | 30.00th=[ 322], 40.00th=[ 326], 50.00th=[ 326], 60.00th=[ 330], 00:24:13.271 | 70.00th=[ 334], 80.00th=[ 338], 90.00th=[ 400], 95.00th=[ 445], 00:24:13.271 | 99.00th=[ 570], 99.50th=[ 644], 99.90th=[ 832], 99.95th=[ 865], 00:24:13.271 | 99.99th=[ 979] 00:24:13.271 write: IOPS=1317, BW=87.5MiB/s (91.8MB/s)(256MiB/2926msec); 0 zone resets 00:24:13.271 slat (nsec): min=13796, max=61317, avg=19227.06, stdev=3604.97 00:24:13.271 clat (usec): min=300, max=1113, avg=382.20, stdev=77.95 00:24:13.271 lat (usec): min=322, max=1132, avg=401.42, stdev=78.39 00:24:13.271 clat percentiles (usec): 00:24:13.271 | 1.00th=[ 334], 5.00th=[ 338], 10.00th=[ 343], 20.00th=[ 347], 00:24:13.272 | 30.00th=[ 351], 40.00th=[ 351], 50.00th=[ 355], 60.00th=[ 359], 00:24:13.272 | 70.00th=[ 367], 80.00th=[ 408], 90.00th=[ 433], 95.00th=[ 545], 00:24:13.272 | 99.00th=[ 734], 99.50th=[ 807], 99.90th=[ 930], 99.95th=[ 1020], 00:24:13.272 | 99.99th=[ 1106] 00:24:13.272 bw ( KiB/s): min=83096, max=93976, per=100.00%, avg=89705.60, stdev=5138.93, samples=5 00:24:13.272 iops : min= 1222, max= 1382, avg=1319.20, stdev=75.57, samples=5 00:24:13.272 lat (usec) : 250=0.01%, 500=95.33%, 750=4.16%, 1000=0.47% 
00:24:13.272 lat (msec) : 2=0.03% 00:24:13.272 cpu : usr=99.11%, sys=0.14%, ctx=9, majf=0, minf=1169 00:24:13.272 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:13.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:13.272 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:13.272 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:13.272 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:13.272 00:24:13.272 Run status group 0 (all jobs): 00:24:13.272 READ: bw=86.9MiB/s (91.1MB/s), 86.9MiB/s-86.9MiB/s (91.1MB/s-91.1MB/s), io=255MiB (267MB), run=2930-2930msec 00:24:13.272 WRITE: bw=87.5MiB/s (91.8MB/s), 87.5MiB/s-87.5MiB/s (91.8MB/s-91.8MB/s), io=256MiB (269MB), run=2926-2926msec 00:24:14.675 ----------------------------------------------------- 00:24:14.675 Suppressions used: 00:24:14.675 count bytes template 00:24:14.675 1 5 /usr/src/fio/parse.c 00:24:14.675 1 8 libtcmalloc_minimal.so 00:24:14.675 1 904 libcrypto.so 00:24:14.675 ----------------------------------------------------- 00:24:14.675 00:24:14.675 06:50:27 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:24:14.675 06:50:27 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:14.675 06:50:27 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:14.675 06:50:27 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:24:14.675 06:50:27 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:24:14.675 06:50:27 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:14.675 06:50:27 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:14.675 06:50:27 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:24:14.675 06:50:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:24:14.675 06:50:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:14.675 06:50:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:14.675 06:50:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:14.675 06:50:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:14.675 06:50:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:24:14.675 06:50:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:14.675 06:50:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:14.675 06:50:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:14.675 06:50:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:24:14.675 06:50:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:14.675 06:50:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:24:14.675 06:50:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:24:14.675 06:50:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:24:14.675 06:50:27 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:14.675 06:50:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:24:14.675 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:24:14.675 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:24:14.675 fio-3.35 00:24:14.675 Starting 2 threads 00:24:41.252 00:24:41.252 first_half: (groupid=0, jobs=1): err= 0: pid=75808: Fri Dec 6 06:50:50 2024 00:24:41.252 read: IOPS=2934, BW=11.5MiB/s (12.0MB/s)(255MiB/22234msec) 00:24:41.252 slat (nsec): min=3117, max=21021, avg=3866.35, stdev=695.76 00:24:41.252 clat (usec): min=564, max=278956, avg=32999.98, stdev=17250.50 00:24:41.252 lat (usec): min=568, max=278961, avg=33003.84, stdev=17250.52 00:24:41.252 clat percentiles (msec): 00:24:41.252 | 1.00th=[ 7], 5.00th=[ 20], 10.00th=[ 27], 20.00th=[ 30], 00:24:41.252 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:24:41.252 | 70.00th=[ 32], 80.00th=[ 35], 90.00th=[ 38], 95.00th=[ 43], 00:24:41.252 | 99.00th=[ 130], 99.50th=[ 150], 99.90th=[ 205], 99.95th=[ 239], 00:24:41.252 | 99.99th=[ 271] 00:24:41.252 write: IOPS=3473, BW=13.6MiB/s (14.2MB/s)(256MiB/18869msec); 0 zone resets 00:24:41.252 slat (usec): min=3, max=345, avg= 5.60, stdev= 3.12 00:24:41.252 clat (usec): min=365, max=77255, avg=10547.00, stdev=16981.81 00:24:41.252 lat (usec): min=370, max=77260, avg=10552.60, stdev=16981.82 00:24:41.252 clat percentiles (usec): 00:24:41.252 | 1.00th=[ 635], 5.00th=[ 725], 10.00th=[ 807], 20.00th=[ 1123], 00:24:41.252 | 30.00th=[ 2638], 40.00th=[ 4178], 50.00th=[ 5014], 60.00th=[ 5538], 00:24:41.252 | 70.00th=[ 6915], 80.00th=[10421], 90.00th=[29492], 95.00th=[60556], 00:24:41.252 | 99.00th=[65799], 99.50th=[68682], 99.90th=[74974], 99.95th=[76022], 00:24:41.252 | 99.99th=[77071] 00:24:41.252 bw ( KiB/s): min= 992, max=50248, per=82.02%, avg=22791.96, stdev=13190.95, samples=23 00:24:41.252 iops : min= 248, max=12562, avg=5697.96, stdev=3297.70, samples=23 00:24:41.252 lat (usec) : 500=0.04%, 750=3.23%, 1000=5.09% 00:24:41.252 lat (msec) : 2=5.31%, 4=6.03%, 10=21.74%, 20=5.03%, 50=47.12% 00:24:41.252 lat (msec) : 100=5.52%, 250=0.87%, 500=0.02% 00:24:41.252 cpu : usr=99.46%, sys=0.09%, ctx=47, majf=0, minf=5597 00:24:41.252 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:24:41.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:41.252 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:41.252 issued rwts: total=65237,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:41.252 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:41.252 second_half: (groupid=0, jobs=1): err= 0: pid=75809: Fri Dec 6 06:50:50 2024 00:24:41.252 read: IOPS=2950, BW=11.5MiB/s (12.1MB/s)(254MiB/22077msec) 00:24:41.252 slat (nsec): min=3102, max=19530, avg=3889.72, stdev=775.58 00:24:41.252 clat (usec): min=602, max=283686, avg=33836.56, stdev=15991.49 00:24:41.252 lat (usec): min=606, max=283691, avg=33840.45, stdev=15991.50 00:24:41.252 clat percentiles (msec): 00:24:41.252 | 1.00th=[ 4], 5.00th=[ 27], 10.00th=[ 28], 20.00th=[ 30], 00:24:41.252 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:24:41.253 | 70.00th=[ 33], 80.00th=[ 35], 90.00th=[ 39], 
95.00th=[ 46], 00:24:41.253 | 99.00th=[ 126], 99.50th=[ 144], 99.90th=[ 163], 99.95th=[ 174], 00:24:41.253 | 99.99th=[ 279] 00:24:41.253 write: IOPS=4217, BW=16.5MiB/s (17.3MB/s)(256MiB/15538msec); 0 zone resets 00:24:41.253 slat (usec): min=3, max=314, avg= 5.66, stdev= 2.65 00:24:41.253 clat (usec): min=374, max=77095, avg=9461.57, stdev=16582.36 00:24:41.253 lat (usec): min=382, max=77101, avg=9467.24, stdev=16582.37 00:24:41.253 clat percentiles (usec): 00:24:41.253 | 1.00th=[ 660], 5.00th=[ 734], 10.00th=[ 816], 20.00th=[ 1045], 00:24:41.253 | 30.00th=[ 1352], 40.00th=[ 2868], 50.00th=[ 4228], 60.00th=[ 5276], 00:24:41.253 | 70.00th=[ 6128], 80.00th=[10028], 90.00th=[12780], 95.00th=[60556], 00:24:41.253 | 99.00th=[65799], 99.50th=[67634], 99.90th=[73925], 99.95th=[74974], 00:24:41.253 | 99.99th=[76022] 00:24:41.253 bw ( KiB/s): min= 320, max=43008, per=99.31%, avg=27594.11, stdev=13852.78, samples=19 00:24:41.253 iops : min= 80, max=10752, avg=6898.53, stdev=3463.20, samples=19 00:24:41.253 lat (usec) : 500=0.01%, 750=2.97%, 1000=6.07% 00:24:41.253 lat (msec) : 2=8.37%, 4=7.10%, 10=16.34%, 20=5.86%, 50=46.59% 00:24:41.253 lat (msec) : 100=5.89%, 250=0.80%, 500=0.01% 00:24:41.253 cpu : usr=99.27%, sys=0.12%, ctx=28, majf=0, minf=5524 00:24:41.253 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:41.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:41.253 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:41.253 issued rwts: total=65143,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:41.253 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:41.253 00:24:41.253 Run status group 0 (all jobs): 00:24:41.253 READ: bw=22.9MiB/s (24.0MB/s), 11.5MiB/s-11.5MiB/s (12.0MB/s-12.1MB/s), io=509MiB (534MB), run=22077-22234msec 00:24:41.253 WRITE: bw=27.1MiB/s (28.5MB/s), 13.6MiB/s-16.5MiB/s (14.2MB/s-17.3MB/s), io=512MiB (537MB), run=15538-18869msec 00:24:41.253 ----------------------------------------------------- 00:24:41.253 Suppressions used: 00:24:41.253 count bytes template 00:24:41.253 2 10 /usr/src/fio/parse.c 00:24:41.253 2 192 /usr/src/fio/iolog.c 00:24:41.253 1 8 libtcmalloc_minimal.so 00:24:41.253 1 904 libcrypto.so 00:24:41.253 ----------------------------------------------------- 00:24:41.253 00:24:41.253 06:50:52 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:24:41.253 06:50:52 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:41.253 06:50:52 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:41.253 06:50:52 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:24:41.253 06:50:52 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:24:41.253 06:50:52 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:41.253 06:50:52 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:41.253 06:50:52 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:24:41.253 06:50:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:24:41.253 06:50:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:41.253 06:50:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 
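The fio_bdev/fio_plugin wrapper being traced here (and in the two runs above) exists to handle an ASan quirk: the spdk_bdev fio plugin is linked against libasan, so the sanitizer runtime has to be preloaded ahead of the plugin when fio itself is not instrumented. A minimal sketch of that dance, assuming the paths shown in this log; job_file is a placeholder for the .fio config:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    # Ask the dynamic linker which ASan runtime the plugin was linked against.
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # /usr/lib64/libasan.so.8 here
    if [ -n "$asan_lib" ]; then
        LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$job_file"
    else
        LD_PRELOAD="$plugin" /usr/src/fio/fio "$job_file"
    fi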
00:24:41.253 06:50:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:41.253 06:50:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:41.253 06:50:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:24:41.253 06:50:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:41.253 06:50:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:41.253 06:50:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:24:41.253 06:50:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:41.253 06:50:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:41.253 06:50:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:24:41.253 06:50:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:24:41.253 06:50:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:24:41.253 06:50:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:41.253 06:50:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:24:41.253 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:24:41.253 fio-3.35 00:24:41.253 Starting 1 thread 00:24:56.237 00:24:56.237 test: (groupid=0, jobs=1): err= 0: pid=76105: Fri Dec 6 06:51:06 2024 00:24:56.237 read: IOPS=8093, BW=31.6MiB/s (33.2MB/s)(255MiB/8056msec) 00:24:56.237 slat (nsec): min=3073, max=23013, avg=3609.09, stdev=641.50 00:24:56.237 clat (usec): min=498, max=31242, avg=15807.72, stdev=1605.10 00:24:56.237 lat (usec): min=502, max=31246, avg=15811.33, stdev=1605.11 00:24:56.237 clat percentiles (usec): 00:24:56.237 | 1.00th=[13960], 5.00th=[14484], 10.00th=[14746], 20.00th=[15008], 00:24:56.237 | 30.00th=[15139], 40.00th=[15270], 50.00th=[15533], 60.00th=[15664], 00:24:56.237 | 70.00th=[15795], 80.00th=[15926], 90.00th=[17171], 95.00th=[19530], 00:24:56.237 | 99.00th=[22414], 99.50th=[23725], 99.90th=[25822], 99.95th=[27395], 00:24:56.237 | 99.99th=[30540] 00:24:56.237 write: IOPS=14.8k, BW=57.7MiB/s (60.6MB/s)(256MiB/4433msec); 0 zone resets 00:24:56.237 slat (usec): min=4, max=425, avg= 6.16, stdev= 2.84 00:24:56.237 clat (usec): min=484, max=101150, avg=8614.11, stdev=10592.99 00:24:56.237 lat (usec): min=490, max=101155, avg=8620.27, stdev=10592.96 00:24:56.237 clat percentiles (usec): 00:24:56.237 | 1.00th=[ 660], 5.00th=[ 807], 10.00th=[ 914], 20.00th=[ 1057], 00:24:56.237 | 30.00th=[ 1237], 40.00th=[ 4178], 50.00th=[ 6128], 60.00th=[ 6980], 00:24:56.237 | 70.00th=[ 8029], 80.00th=[ 9372], 90.00th=[29754], 95.00th=[31851], 00:24:56.237 | 99.00th=[36963], 99.50th=[38536], 99.90th=[84411], 99.95th=[90702], 00:24:56.237 | 99.99th=[98042] 00:24:56.237 bw ( KiB/s): min=50832, max=74104, per=98.51%, avg=58254.22, stdev=7675.08, samples=9 00:24:56.237 iops : min=12708, max=18526, avg=14563.56, stdev=1918.77, samples=9 00:24:56.237 lat (usec) : 500=0.01%, 750=1.65%, 1000=6.17% 00:24:56.237 lat (msec) : 2=11.30%, 4=0.92%, 10=20.78%, 20=49.52%, 50=9.56% 00:24:56.237 lat (msec) : 100=0.09%, 250=0.01% 00:24:56.237 cpu : usr=99.04%, 
sys=0.28%, ctx=22, majf=0, minf=5565 00:24:56.237 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:24:56.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:56.237 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:56.237 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:56.237 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:56.237 00:24:56.237 Run status group 0 (all jobs): 00:24:56.237 READ: bw=31.6MiB/s (33.2MB/s), 31.6MiB/s-31.6MiB/s (33.2MB/s-33.2MB/s), io=255MiB (267MB), run=8056-8056msec 00:24:56.237 WRITE: bw=57.7MiB/s (60.6MB/s), 57.7MiB/s-57.7MiB/s (60.6MB/s-60.6MB/s), io=256MiB (268MB), run=4433-4433msec 00:24:56.237 ----------------------------------------------------- 00:24:56.237 Suppressions used: 00:24:56.237 count bytes template 00:24:56.237 1 5 /usr/src/fio/parse.c 00:24:56.237 2 192 /usr/src/fio/iolog.c 00:24:56.237 1 8 libtcmalloc_minimal.so 00:24:56.237 1 904 libcrypto.so 00:24:56.237 ----------------------------------------------------- 00:24:56.237 00:24:56.237 06:51:08 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:24:56.237 06:51:08 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:56.237 06:51:08 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:56.237 06:51:08 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:56.237 06:51:08 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:24:56.237 Remove shared memory files 00:24:56.237 06:51:08 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:56.237 06:51:08 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:24:56.237 06:51:08 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:24:56.237 06:51:08 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57231 /dev/shm/spdk_tgt_trace.pid74454 00:24:56.237 06:51:08 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:56.237 06:51:08 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:24:56.237 00:24:56.237 real 1m2.624s 00:24:56.237 user 2m14.409s 00:24:56.237 sys 0m2.509s 00:24:56.237 06:51:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:56.237 06:51:08 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:56.237 ************************************ 00:24:56.237 END TEST ftl_fio_basic 00:24:56.237 ************************************ 00:24:56.237 06:51:08 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:24:56.237 06:51:08 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:56.237 06:51:08 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:56.237 06:51:08 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:56.237 ************************************ 00:24:56.237 START TEST ftl_bdevperf 00:24:56.237 ************************************ 00:24:56.237 06:51:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:24:56.237 * Looking for test storage... 
00:24:56.237 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:56.237 06:51:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:56.237 06:51:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:24:56.237 06:51:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:56.237 06:51:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:56.237 06:51:08 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:56.237 06:51:08 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:56.237 06:51:08 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:56.237 06:51:08 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:24:56.237 06:51:08 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:24:56.237 06:51:08 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:24:56.237 06:51:08 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:24:56.237 06:51:08 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:24:56.237 06:51:08 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:24:56.237 06:51:08 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:24:56.237 06:51:08 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:56.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:56.238 --rc genhtml_branch_coverage=1 00:24:56.238 --rc genhtml_function_coverage=1 00:24:56.238 --rc genhtml_legend=1 00:24:56.238 --rc geninfo_all_blocks=1 00:24:56.238 --rc geninfo_unexecuted_blocks=1 00:24:56.238 00:24:56.238 ' 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:56.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:56.238 --rc genhtml_branch_coverage=1 00:24:56.238 
--rc genhtml_function_coverage=1 00:24:56.238 --rc genhtml_legend=1 00:24:56.238 --rc geninfo_all_blocks=1 00:24:56.238 --rc geninfo_unexecuted_blocks=1 00:24:56.238 00:24:56.238 ' 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:56.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:56.238 --rc genhtml_branch_coverage=1 00:24:56.238 --rc genhtml_function_coverage=1 00:24:56.238 --rc genhtml_legend=1 00:24:56.238 --rc geninfo_all_blocks=1 00:24:56.238 --rc geninfo_unexecuted_blocks=1 00:24:56.238 00:24:56.238 ' 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:56.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:56.238 --rc genhtml_branch_coverage=1 00:24:56.238 --rc genhtml_function_coverage=1 00:24:56.238 --rc genhtml_legend=1 00:24:56.238 --rc geninfo_all_blocks=1 00:24:56.238 --rc geninfo_unexecuted_blocks=1 00:24:56.238 00:24:56.238 ' 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=76332 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 76332 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 76332 ']' 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:56.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:56.238 06:51:08 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:56.238 [2024-12-06 06:51:08.512109] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
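The waitforlisten call above blocks until the freshly started bdevperf app (pid 76332, launched with -z so it idles until driven over RPC) has its RPC socket up at /var/tmp/spdk.sock. A rough sketch of that polling idea follows; wait_for_rpc is an invented name and an assumed simplification of the real autotest_common.sh helper:

    wait_for_rpc() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock}
        local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
        for _ in $(seq 1 100); do
            kill -0 "$pid" 2>/dev/null || return 1            # app died during startup
            "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1 && return 0
            sleep 0.1
        done
        return 1                                              # timed out
    }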
00:24:56.238 [2024-12-06 06:51:08.512206] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76332 ] 00:24:56.238 [2024-12-06 06:51:08.663826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.238 [2024-12-06 06:51:08.761922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:56.806 06:51:09 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:56.806 06:51:09 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:24:56.806 06:51:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:56.806 06:51:09 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:24:56.806 06:51:09 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:56.806 06:51:09 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:24:56.806 06:51:09 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:24:56.806 06:51:09 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:57.065 06:51:09 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:57.065 06:51:09 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:24:57.065 06:51:09 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:57.065 06:51:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:24:57.065 06:51:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:57.065 06:51:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:24:57.065 06:51:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:24:57.065 06:51:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:57.326 06:51:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:57.326 { 00:24:57.326 "name": "nvme0n1", 00:24:57.326 "aliases": [ 00:24:57.326 "71aa3c4e-8400-46e4-92b9-e04400ffb63b" 00:24:57.326 ], 00:24:57.326 "product_name": "NVMe disk", 00:24:57.326 "block_size": 4096, 00:24:57.326 "num_blocks": 1310720, 00:24:57.326 "uuid": "71aa3c4e-8400-46e4-92b9-e04400ffb63b", 00:24:57.326 "numa_id": -1, 00:24:57.326 "assigned_rate_limits": { 00:24:57.326 "rw_ios_per_sec": 0, 00:24:57.326 "rw_mbytes_per_sec": 0, 00:24:57.326 "r_mbytes_per_sec": 0, 00:24:57.326 "w_mbytes_per_sec": 0 00:24:57.326 }, 00:24:57.326 "claimed": true, 00:24:57.326 "claim_type": "read_many_write_one", 00:24:57.326 "zoned": false, 00:24:57.326 "supported_io_types": { 00:24:57.326 "read": true, 00:24:57.326 "write": true, 00:24:57.326 "unmap": true, 00:24:57.326 "flush": true, 00:24:57.326 "reset": true, 00:24:57.326 "nvme_admin": true, 00:24:57.326 "nvme_io": true, 00:24:57.326 "nvme_io_md": false, 00:24:57.326 "write_zeroes": true, 00:24:57.326 "zcopy": false, 00:24:57.326 "get_zone_info": false, 00:24:57.326 "zone_management": false, 00:24:57.326 "zone_append": false, 00:24:57.326 "compare": true, 00:24:57.326 "compare_and_write": false, 00:24:57.326 "abort": true, 00:24:57.326 "seek_hole": false, 00:24:57.326 "seek_data": false, 00:24:57.326 "copy": true, 00:24:57.326 "nvme_iov_md": false 00:24:57.326 }, 00:24:57.326 "driver_specific": { 00:24:57.326 
"nvme": [ 00:24:57.326 { 00:24:57.326 "pci_address": "0000:00:11.0", 00:24:57.326 "trid": { 00:24:57.326 "trtype": "PCIe", 00:24:57.326 "traddr": "0000:00:11.0" 00:24:57.326 }, 00:24:57.326 "ctrlr_data": { 00:24:57.326 "cntlid": 0, 00:24:57.326 "vendor_id": "0x1b36", 00:24:57.326 "model_number": "QEMU NVMe Ctrl", 00:24:57.326 "serial_number": "12341", 00:24:57.326 "firmware_revision": "8.0.0", 00:24:57.326 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:57.326 "oacs": { 00:24:57.326 "security": 0, 00:24:57.326 "format": 1, 00:24:57.326 "firmware": 0, 00:24:57.326 "ns_manage": 1 00:24:57.326 }, 00:24:57.326 "multi_ctrlr": false, 00:24:57.326 "ana_reporting": false 00:24:57.326 }, 00:24:57.326 "vs": { 00:24:57.326 "nvme_version": "1.4" 00:24:57.326 }, 00:24:57.326 "ns_data": { 00:24:57.326 "id": 1, 00:24:57.326 "can_share": false 00:24:57.326 } 00:24:57.326 } 00:24:57.326 ], 00:24:57.326 "mp_policy": "active_passive" 00:24:57.326 } 00:24:57.326 } 00:24:57.326 ]' 00:24:57.326 06:51:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:57.326 06:51:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:24:57.326 06:51:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:57.326 06:51:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:24:57.326 06:51:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:24:57.326 06:51:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:24:57.326 06:51:09 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:24:57.326 06:51:09 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:57.326 06:51:09 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:24:57.326 06:51:09 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:57.326 06:51:09 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:57.587 06:51:10 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=08687166-3dd1-42b9-84d2-17e87febdd8d 00:24:57.587 06:51:10 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:24:57.587 06:51:10 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 08687166-3dd1-42b9-84d2-17e87febdd8d 00:24:57.848 06:51:10 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:24:57.848 06:51:10 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=f9f06661-c6b8-45d0-946d-e7d1858d86fb 00:24:57.848 06:51:10 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f9f06661-c6b8-45d0-946d-e7d1858d86fb 00:24:58.109 06:51:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=c7279b5b-5787-45c0-8f24-b8b8e43a8380 00:24:58.109 06:51:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 c7279b5b-5787-45c0-8f24-b8b8e43a8380 00:24:58.109 06:51:10 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:24:58.109 06:51:10 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:24:58.109 06:51:10 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=c7279b5b-5787-45c0-8f24-b8b8e43a8380 00:24:58.109 06:51:10 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:24:58.109 06:51:10 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size c7279b5b-5787-45c0-8f24-b8b8e43a8380 00:24:58.109 06:51:10 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=c7279b5b-5787-45c0-8f24-b8b8e43a8380 00:24:58.109 06:51:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:58.110 06:51:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:24:58.110 06:51:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:24:58.110 06:51:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c7279b5b-5787-45c0-8f24-b8b8e43a8380 00:24:58.371 06:51:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:58.371 { 00:24:58.371 "name": "c7279b5b-5787-45c0-8f24-b8b8e43a8380", 00:24:58.372 "aliases": [ 00:24:58.372 "lvs/nvme0n1p0" 00:24:58.372 ], 00:24:58.372 "product_name": "Logical Volume", 00:24:58.372 "block_size": 4096, 00:24:58.372 "num_blocks": 26476544, 00:24:58.372 "uuid": "c7279b5b-5787-45c0-8f24-b8b8e43a8380", 00:24:58.372 "assigned_rate_limits": { 00:24:58.372 "rw_ios_per_sec": 0, 00:24:58.372 "rw_mbytes_per_sec": 0, 00:24:58.372 "r_mbytes_per_sec": 0, 00:24:58.372 "w_mbytes_per_sec": 0 00:24:58.372 }, 00:24:58.372 "claimed": false, 00:24:58.372 "zoned": false, 00:24:58.372 "supported_io_types": { 00:24:58.372 "read": true, 00:24:58.372 "write": true, 00:24:58.372 "unmap": true, 00:24:58.372 "flush": false, 00:24:58.372 "reset": true, 00:24:58.372 "nvme_admin": false, 00:24:58.372 "nvme_io": false, 00:24:58.372 "nvme_io_md": false, 00:24:58.372 "write_zeroes": true, 00:24:58.372 "zcopy": false, 00:24:58.372 "get_zone_info": false, 00:24:58.372 "zone_management": false, 00:24:58.372 "zone_append": false, 00:24:58.372 "compare": false, 00:24:58.372 "compare_and_write": false, 00:24:58.372 "abort": false, 00:24:58.372 "seek_hole": true, 00:24:58.372 "seek_data": true, 00:24:58.372 "copy": false, 00:24:58.372 "nvme_iov_md": false 00:24:58.372 }, 00:24:58.372 "driver_specific": { 00:24:58.372 "lvol": { 00:24:58.372 "lvol_store_uuid": "f9f06661-c6b8-45d0-946d-e7d1858d86fb", 00:24:58.372 "base_bdev": "nvme0n1", 00:24:58.372 "thin_provision": true, 00:24:58.372 "num_allocated_clusters": 0, 00:24:58.372 "snapshot": false, 00:24:58.372 "clone": false, 00:24:58.372 "esnap_clone": false 00:24:58.372 } 00:24:58.372 } 00:24:58.372 } 00:24:58.372 ]' 00:24:58.372 06:51:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:58.372 06:51:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:24:58.372 06:51:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:58.372 06:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:58.372 06:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:58.372 06:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:24:58.372 06:51:11 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:24:58.372 06:51:11 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:24:58.372 06:51:11 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:58.634 06:51:11 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:58.634 06:51:11 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:58.634 06:51:11 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size c7279b5b-5787-45c0-8f24-b8b8e43a8380 00:24:58.634 06:51:11 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=c7279b5b-5787-45c0-8f24-b8b8e43a8380 00:24:58.634 06:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:58.634 06:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:24:58.634 06:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:24:58.634 06:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c7279b5b-5787-45c0-8f24-b8b8e43a8380 00:24:58.897 06:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:58.897 { 00:24:58.897 "name": "c7279b5b-5787-45c0-8f24-b8b8e43a8380", 00:24:58.897 "aliases": [ 00:24:58.897 "lvs/nvme0n1p0" 00:24:58.897 ], 00:24:58.897 "product_name": "Logical Volume", 00:24:58.897 "block_size": 4096, 00:24:58.897 "num_blocks": 26476544, 00:24:58.897 "uuid": "c7279b5b-5787-45c0-8f24-b8b8e43a8380", 00:24:58.897 "assigned_rate_limits": { 00:24:58.897 "rw_ios_per_sec": 0, 00:24:58.897 "rw_mbytes_per_sec": 0, 00:24:58.897 "r_mbytes_per_sec": 0, 00:24:58.897 "w_mbytes_per_sec": 0 00:24:58.897 }, 00:24:58.897 "claimed": false, 00:24:58.897 "zoned": false, 00:24:58.897 "supported_io_types": { 00:24:58.897 "read": true, 00:24:58.897 "write": true, 00:24:58.897 "unmap": true, 00:24:58.897 "flush": false, 00:24:58.897 "reset": true, 00:24:58.897 "nvme_admin": false, 00:24:58.897 "nvme_io": false, 00:24:58.897 "nvme_io_md": false, 00:24:58.897 "write_zeroes": true, 00:24:58.897 "zcopy": false, 00:24:58.897 "get_zone_info": false, 00:24:58.897 "zone_management": false, 00:24:58.897 "zone_append": false, 00:24:58.897 "compare": false, 00:24:58.897 "compare_and_write": false, 00:24:58.897 "abort": false, 00:24:58.897 "seek_hole": true, 00:24:58.897 "seek_data": true, 00:24:58.897 "copy": false, 00:24:58.897 "nvme_iov_md": false 00:24:58.897 }, 00:24:58.897 "driver_specific": { 00:24:58.897 "lvol": { 00:24:58.897 "lvol_store_uuid": "f9f06661-c6b8-45d0-946d-e7d1858d86fb", 00:24:58.897 "base_bdev": "nvme0n1", 00:24:58.897 "thin_provision": true, 00:24:58.897 "num_allocated_clusters": 0, 00:24:58.897 "snapshot": false, 00:24:58.897 "clone": false, 00:24:58.897 "esnap_clone": false 00:24:58.897 } 00:24:58.897 } 00:24:58.897 } 00:24:58.897 ]' 00:24:58.897 06:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:58.897 06:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:24:58.897 06:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:58.897 06:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:58.897 06:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:58.897 06:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:24:58.897 06:51:11 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:24:58.897 06:51:11 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:59.182 06:51:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:24:59.182 06:51:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size c7279b5b-5787-45c0-8f24-b8b8e43a8380 00:24:59.182 06:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=c7279b5b-5787-45c0-8f24-b8b8e43a8380 00:24:59.182 06:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:59.182 06:51:11 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:24:59.182 06:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:24:59.182 06:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c7279b5b-5787-45c0-8f24-b8b8e43a8380 00:24:59.445 06:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:59.445 { 00:24:59.445 "name": "c7279b5b-5787-45c0-8f24-b8b8e43a8380", 00:24:59.445 "aliases": [ 00:24:59.445 "lvs/nvme0n1p0" 00:24:59.445 ], 00:24:59.445 "product_name": "Logical Volume", 00:24:59.446 "block_size": 4096, 00:24:59.446 "num_blocks": 26476544, 00:24:59.446 "uuid": "c7279b5b-5787-45c0-8f24-b8b8e43a8380", 00:24:59.446 "assigned_rate_limits": { 00:24:59.446 "rw_ios_per_sec": 0, 00:24:59.446 "rw_mbytes_per_sec": 0, 00:24:59.446 "r_mbytes_per_sec": 0, 00:24:59.446 "w_mbytes_per_sec": 0 00:24:59.446 }, 00:24:59.446 "claimed": false, 00:24:59.446 "zoned": false, 00:24:59.446 "supported_io_types": { 00:24:59.446 "read": true, 00:24:59.446 "write": true, 00:24:59.446 "unmap": true, 00:24:59.446 "flush": false, 00:24:59.446 "reset": true, 00:24:59.446 "nvme_admin": false, 00:24:59.446 "nvme_io": false, 00:24:59.446 "nvme_io_md": false, 00:24:59.446 "write_zeroes": true, 00:24:59.446 "zcopy": false, 00:24:59.446 "get_zone_info": false, 00:24:59.446 "zone_management": false, 00:24:59.446 "zone_append": false, 00:24:59.446 "compare": false, 00:24:59.446 "compare_and_write": false, 00:24:59.446 "abort": false, 00:24:59.446 "seek_hole": true, 00:24:59.446 "seek_data": true, 00:24:59.446 "copy": false, 00:24:59.446 "nvme_iov_md": false 00:24:59.446 }, 00:24:59.446 "driver_specific": { 00:24:59.446 "lvol": { 00:24:59.446 "lvol_store_uuid": "f9f06661-c6b8-45d0-946d-e7d1858d86fb", 00:24:59.446 "base_bdev": "nvme0n1", 00:24:59.446 "thin_provision": true, 00:24:59.446 "num_allocated_clusters": 0, 00:24:59.446 "snapshot": false, 00:24:59.446 "clone": false, 00:24:59.446 "esnap_clone": false 00:24:59.446 } 00:24:59.446 } 00:24:59.446 } 00:24:59.446 ]' 00:24:59.446 06:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:59.446 06:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:24:59.446 06:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:59.446 06:51:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:59.446 06:51:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:59.446 06:51:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:24:59.446 06:51:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:24:59.446 06:51:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c7279b5b-5787-45c0-8f24-b8b8e43a8380 -c nvc0n1p0 --l2p_dram_limit 20 00:24:59.709 [2024-12-06 06:51:12.211185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.709 [2024-12-06 06:51:12.211237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:59.709 [2024-12-06 06:51:12.211249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:59.709 [2024-12-06 06:51:12.211258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.709 [2024-12-06 06:51:12.211304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.709 [2024-12-06 06:51:12.211314] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:59.709 [2024-12-06 06:51:12.211321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:24:59.709 [2024-12-06 06:51:12.211330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.709 [2024-12-06 06:51:12.211343] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:59.709 [2024-12-06 06:51:12.211942] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:59.709 [2024-12-06 06:51:12.211960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.709 [2024-12-06 06:51:12.211968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:59.709 [2024-12-06 06:51:12.211975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.621 ms 00:24:59.709 [2024-12-06 06:51:12.211982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.709 [2024-12-06 06:51:12.212085] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b56d5da3-4708-4c22-9517-491021f51ac7 00:24:59.709 [2024-12-06 06:51:12.213027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.709 [2024-12-06 06:51:12.213056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:59.709 [2024-12-06 06:51:12.213067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:24:59.709 [2024-12-06 06:51:12.213073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.709 [2024-12-06 06:51:12.217888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.709 [2024-12-06 06:51:12.217916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:59.709 [2024-12-06 06:51:12.217925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.780 ms 00:24:59.709 [2024-12-06 06:51:12.217933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.709 [2024-12-06 06:51:12.218001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.709 [2024-12-06 06:51:12.218008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:59.709 [2024-12-06 06:51:12.218019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:24:59.709 [2024-12-06 06:51:12.218024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.709 [2024-12-06 06:51:12.218059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.709 [2024-12-06 06:51:12.218067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:59.709 [2024-12-06 06:51:12.218075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:59.709 [2024-12-06 06:51:12.218081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.709 [2024-12-06 06:51:12.218099] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:59.709 [2024-12-06 06:51:12.221034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.709 [2024-12-06 06:51:12.221062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:59.709 [2024-12-06 06:51:12.221070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.943 ms 00:24:59.709 [2024-12-06 06:51:12.221079] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.709 [2024-12-06 06:51:12.221105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.709 [2024-12-06 06:51:12.221113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:59.709 [2024-12-06 06:51:12.221119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:59.709 [2024-12-06 06:51:12.221126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.709 [2024-12-06 06:51:12.221137] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:59.709 [2024-12-06 06:51:12.221252] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:59.709 [2024-12-06 06:51:12.221265] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:59.709 [2024-12-06 06:51:12.221276] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:59.709 [2024-12-06 06:51:12.221284] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:59.709 [2024-12-06 06:51:12.221294] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:59.709 [2024-12-06 06:51:12.221300] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:59.709 [2024-12-06 06:51:12.221307] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:59.709 [2024-12-06 06:51:12.221313] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:59.709 [2024-12-06 06:51:12.221320] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:59.709 [2024-12-06 06:51:12.221328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.709 [2024-12-06 06:51:12.221335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:59.709 [2024-12-06 06:51:12.221341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.192 ms 00:24:59.709 [2024-12-06 06:51:12.221349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.709 [2024-12-06 06:51:12.221416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.709 [2024-12-06 06:51:12.221424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:59.709 [2024-12-06 06:51:12.221430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:24:59.709 [2024-12-06 06:51:12.221438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.709 [2024-12-06 06:51:12.221518] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:59.709 [2024-12-06 06:51:12.221533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:59.709 [2024-12-06 06:51:12.221539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:59.709 [2024-12-06 06:51:12.221547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:59.709 [2024-12-06 06:51:12.221553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:59.709 [2024-12-06 06:51:12.221560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:59.709 [2024-12-06 06:51:12.221565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:59.709 
[2024-12-06 06:51:12.221572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:59.709 [2024-12-06 06:51:12.221577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:59.709 [2024-12-06 06:51:12.221585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:59.709 [2024-12-06 06:51:12.221590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:59.709 [2024-12-06 06:51:12.221602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:59.709 [2024-12-06 06:51:12.221607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:59.709 [2024-12-06 06:51:12.221614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:59.709 [2024-12-06 06:51:12.221619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:59.709 [2024-12-06 06:51:12.221628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:59.709 [2024-12-06 06:51:12.221633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:59.709 [2024-12-06 06:51:12.221640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:59.709 [2024-12-06 06:51:12.221645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:59.709 [2024-12-06 06:51:12.221652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:59.709 [2024-12-06 06:51:12.221657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:59.709 [2024-12-06 06:51:12.221663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:59.709 [2024-12-06 06:51:12.221669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:59.709 [2024-12-06 06:51:12.221676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:59.709 [2024-12-06 06:51:12.221681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:59.709 [2024-12-06 06:51:12.221687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:59.709 [2024-12-06 06:51:12.221692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:59.710 [2024-12-06 06:51:12.221699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:59.710 [2024-12-06 06:51:12.221704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:59.710 [2024-12-06 06:51:12.221710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:59.710 [2024-12-06 06:51:12.221715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:59.710 [2024-12-06 06:51:12.221723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:59.710 [2024-12-06 06:51:12.221728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:59.710 [2024-12-06 06:51:12.221736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:59.710 [2024-12-06 06:51:12.221741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:59.710 [2024-12-06 06:51:12.221748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:59.710 [2024-12-06 06:51:12.221752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:59.710 [2024-12-06 06:51:12.221759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:59.710 [2024-12-06 06:51:12.221764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:24:59.710 [2024-12-06 06:51:12.221771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:59.710 [2024-12-06 06:51:12.221776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:59.710 [2024-12-06 06:51:12.221783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:59.710 [2024-12-06 06:51:12.221788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:59.710 [2024-12-06 06:51:12.221794] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:59.710 [2024-12-06 06:51:12.221800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:59.710 [2024-12-06 06:51:12.221807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:59.710 [2024-12-06 06:51:12.221812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:59.710 [2024-12-06 06:51:12.221821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:59.710 [2024-12-06 06:51:12.221827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:59.710 [2024-12-06 06:51:12.221834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:59.710 [2024-12-06 06:51:12.221839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:59.710 [2024-12-06 06:51:12.221846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:59.710 [2024-12-06 06:51:12.221851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:59.710 [2024-12-06 06:51:12.221859] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:59.710 [2024-12-06 06:51:12.221866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:59.710 [2024-12-06 06:51:12.221874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:59.710 [2024-12-06 06:51:12.221879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:59.710 [2024-12-06 06:51:12.221886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:59.710 [2024-12-06 06:51:12.221891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:59.710 [2024-12-06 06:51:12.221898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:59.710 [2024-12-06 06:51:12.221904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:59.710 [2024-12-06 06:51:12.221911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:59.710 [2024-12-06 06:51:12.221917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:59.710 [2024-12-06 06:51:12.221925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:59.710 [2024-12-06 06:51:12.221930] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:59.710 [2024-12-06 06:51:12.221937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:59.710 [2024-12-06 06:51:12.221942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:59.710 [2024-12-06 06:51:12.221949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:59.710 [2024-12-06 06:51:12.221954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:59.710 [2024-12-06 06:51:12.221961] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:59.710 [2024-12-06 06:51:12.221968] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:59.710 [2024-12-06 06:51:12.221978] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:59.710 [2024-12-06 06:51:12.221983] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:59.710 [2024-12-06 06:51:12.221990] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:59.710 [2024-12-06 06:51:12.221996] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:59.710 [2024-12-06 06:51:12.222003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.710 [2024-12-06 06:51:12.222009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:59.710 [2024-12-06 06:51:12.222016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.546 ms 00:24:59.710 [2024-12-06 06:51:12.222021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.710 [2024-12-06 06:51:12.222062] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
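Everything from "Check configuration" down to this layout dump is the startup of the FTL bdev requested earlier via bdev_ftl_create. For reference, the creating call (copied from the trace) and the size arithmetic the layout confirms:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # -d: base bdev (the thin-provisioned lvol), -c: NV cache (first split of
  # nvc0n1); the 240 s RPC timeout covers the first-start NV cache scrub that
  # follows this dump.
  $rpc -t 240 bdev_ftl_create -b ftl0 \
      -d c7279b5b-5787-45c0-8f24-b8b8e43a8380 \
      -c nvc0n1p0 \
      --l2p_dram_limit 20

  # Cross-check against the layout: 20971520 L2P entries x 4-byte addresses
  # = 80 MiB, matching "Region l2p ... blocks: 80.00 MiB". --l2p_dram_limit 20
  # caps how much of that table stays resident in DRAM; the log later reports
  # "l2p maximum resident size is: 19 (of 20) MiB".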
00:24:59.710 [2024-12-06 06:51:12.222070] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:02.309 [2024-12-06 06:51:14.686525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.309 [2024-12-06 06:51:14.686586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:02.309 [2024-12-06 06:51:14.686601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2464.452 ms 00:25:02.309 [2024-12-06 06:51:14.686609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.309 [2024-12-06 06:51:14.712198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.309 [2024-12-06 06:51:14.712246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:02.309 [2024-12-06 06:51:14.712261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.338 ms 00:25:02.309 [2024-12-06 06:51:14.712269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.309 [2024-12-06 06:51:14.712398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.309 [2024-12-06 06:51:14.712409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:02.309 [2024-12-06 06:51:14.712420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:25:02.309 [2024-12-06 06:51:14.712428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.309 [2024-12-06 06:51:14.751378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.309 [2024-12-06 06:51:14.751439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:02.309 [2024-12-06 06:51:14.751455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.892 ms 00:25:02.309 [2024-12-06 06:51:14.751473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.309 [2024-12-06 06:51:14.751525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.309 [2024-12-06 06:51:14.751534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:02.309 [2024-12-06 06:51:14.751544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:02.309 [2024-12-06 06:51:14.751553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.309 [2024-12-06 06:51:14.751923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.309 [2024-12-06 06:51:14.751950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:02.309 [2024-12-06 06:51:14.751961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:25:02.309 [2024-12-06 06:51:14.751969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.309 [2024-12-06 06:51:14.752093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.309 [2024-12-06 06:51:14.752103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:02.309 [2024-12-06 06:51:14.752115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:25:02.309 [2024-12-06 06:51:14.752123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.309 [2024-12-06 06:51:14.765162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.309 [2024-12-06 06:51:14.765200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:02.309 [2024-12-06 
06:51:14.765212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.020 ms 00:25:02.309 [2024-12-06 06:51:14.765228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.309 [2024-12-06 06:51:14.776688] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:25:02.309 [2024-12-06 06:51:14.781771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.309 [2024-12-06 06:51:14.781810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:02.309 [2024-12-06 06:51:14.781822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.461 ms 00:25:02.309 [2024-12-06 06:51:14.781832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.309 [2024-12-06 06:51:14.847932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.309 [2024-12-06 06:51:14.848002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:02.309 [2024-12-06 06:51:14.848017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.067 ms 00:25:02.309 [2024-12-06 06:51:14.848027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.309 [2024-12-06 06:51:14.848206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.309 [2024-12-06 06:51:14.848221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:02.309 [2024-12-06 06:51:14.848230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:25:02.309 [2024-12-06 06:51:14.848242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.309 [2024-12-06 06:51:14.871065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.309 [2024-12-06 06:51:14.871122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:02.309 [2024-12-06 06:51:14.871135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.779 ms 00:25:02.309 [2024-12-06 06:51:14.871145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.309 [2024-12-06 06:51:14.893551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.309 [2024-12-06 06:51:14.893606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:02.309 [2024-12-06 06:51:14.893618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.367 ms 00:25:02.309 [2024-12-06 06:51:14.893627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.309 [2024-12-06 06:51:14.894191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.309 [2024-12-06 06:51:14.894215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:02.309 [2024-12-06 06:51:14.894224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:25:02.309 [2024-12-06 06:51:14.894233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.309 [2024-12-06 06:51:14.964322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.309 [2024-12-06 06:51:14.964389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:02.309 [2024-12-06 06:51:14.964403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.053 ms 00:25:02.309 [2024-12-06 06:51:14.964412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.309 [2024-12-06 
06:51:14.988500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.309 [2024-12-06 06:51:14.988557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:02.309 [2024-12-06 06:51:14.988572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.997 ms 00:25:02.309 [2024-12-06 06:51:14.988581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.309 [2024-12-06 06:51:15.012232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.309 [2024-12-06 06:51:15.012297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:02.309 [2024-12-06 06:51:15.012309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.608 ms 00:25:02.309 [2024-12-06 06:51:15.012317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.309 [2024-12-06 06:51:15.035166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.309 [2024-12-06 06:51:15.035216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:02.309 [2024-12-06 06:51:15.035228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.813 ms 00:25:02.309 [2024-12-06 06:51:15.035237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.309 [2024-12-06 06:51:15.035275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.309 [2024-12-06 06:51:15.035290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:02.309 [2024-12-06 06:51:15.035299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:02.309 [2024-12-06 06:51:15.035308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.309 [2024-12-06 06:51:15.035386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.309 [2024-12-06 06:51:15.035414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:02.309 [2024-12-06 06:51:15.035423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:25:02.309 [2024-12-06 06:51:15.035432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.309 [2024-12-06 06:51:15.036283] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2824.690 ms, result 0 00:25:02.309 { 00:25:02.309 "name": "ftl0", 00:25:02.309 "uuid": "b56d5da3-4708-4c22-9517-491021f51ac7" 00:25:02.309 } 00:25:02.571 06:51:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:25:02.571 06:51:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:25:02.571 06:51:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:25:02.571 06:51:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:25:02.830 [2024-12-06 06:51:15.348575] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:25:02.830 I/O size of 69632 is greater than zero copy threshold (65536). 00:25:02.830 Zero copy mechanism will not be used. 00:25:02.830 Running I/O for 4 seconds... 
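This first benchmark pass drives queue-depth-1 random writes with a 68 KiB I/O size (69632 = 17 x 4096 bytes); since that exceeds bdevperf's 65536-byte zero-copy threshold, the run proceeds without zero copy, as the notice above states. The invocation, restated as shell:

  bdevperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py

  # QD1 random writes for 4 s with 68 KiB I/Os (> 65536, so no zero copy).
  $bdevperf_py perform_tests -q 1 -w randwrite -t 4 -o 69632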
00:25:04.714 1759.00 IOPS, 116.81 MiB/s [2024-12-06T06:51:18.398Z] 1974.00 IOPS, 131.09 MiB/s [2024-12-06T06:51:19.786Z] 1928.67 IOPS, 128.08 MiB/s [2024-12-06T06:51:19.786Z] 1868.00 IOPS, 124.05 MiB/s 00:25:07.045 Latency(us) 00:25:07.045 [2024-12-06T06:51:19.786Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:07.045 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:25:07.045 ftl0 : 4.00 1867.42 124.01 0.00 0.00 559.71 155.96 11846.89 00:25:07.045 [2024-12-06T06:51:19.786Z] =================================================================================================================== 00:25:07.045 [2024-12-06T06:51:19.786Z] Total : 1867.42 124.01 0.00 0.00 559.71 155.96 11846.89 00:25:07.045 [2024-12-06 06:51:19.358541] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:25:07.045 { 00:25:07.045 "results": [ 00:25:07.045 { 00:25:07.045 "job": "ftl0", 00:25:07.045 "core_mask": "0x1", 00:25:07.045 "workload": "randwrite", 00:25:07.045 "status": "finished", 00:25:07.045 "queue_depth": 1, 00:25:07.045 "io_size": 69632, 00:25:07.045 "runtime": 4.001784, 00:25:07.045 "iops": 1867.4171319591462, 00:25:07.045 "mibps": 124.00816891916206, 00:25:07.045 "io_failed": 0, 00:25:07.045 "io_timeout": 0, 00:25:07.045 "avg_latency_us": 559.7133047174958, 00:25:07.045 "min_latency_us": 155.96307692307693, 00:25:07.045 "max_latency_us": 11846.892307692307 00:25:07.045 } 00:25:07.045 ], 00:25:07.045 "core_count": 1 00:25:07.045 } 00:25:07.045 06:51:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:25:07.045 [2024-12-06 06:51:19.469562] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:25:07.045 Running I/O for 4 seconds... 
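perform_tests reports each run twice: as the human-readable latency table and as the JSON block above. If that JSON is captured to a file (results.json is only an illustrative name), the key figures can be pulled out with jq; the table line can also be re-derived from them, e.g. for the next run 8707.70 IOPS x 4096 B = 34.01 MiB/s.

  # Throughput and average latency per job from a saved perform_tests result.
  jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' results.json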
00:25:08.936 9535.00 IOPS, 37.25 MiB/s [2024-12-06T06:51:22.621Z] 9398.50 IOPS, 36.71 MiB/s [2024-12-06T06:51:23.566Z] 9032.67 IOPS, 35.28 MiB/s [2024-12-06T06:51:23.566Z] 8721.50 IOPS, 34.07 MiB/s 00:25:10.825 Latency(us) 00:25:10.825 [2024-12-06T06:51:23.566Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:10.825 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:25:10.825 ftl0 : 4.02 8707.70 34.01 0.00 0.00 14662.65 248.91 32667.18 00:25:10.825 [2024-12-06T06:51:23.566Z] =================================================================================================================== 00:25:10.825 [2024-12-06T06:51:23.566Z] Total : 8707.70 34.01 0.00 0.00 14662.65 0.00 32667.18 00:25:10.825 [2024-12-06 06:51:23.499305] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:25:10.825 { 00:25:10.825 "results": [ 00:25:10.825 { 00:25:10.825 "job": "ftl0", 00:25:10.825 "core_mask": "0x1", 00:25:10.825 "workload": "randwrite", 00:25:10.825 "status": "finished", 00:25:10.825 "queue_depth": 128, 00:25:10.825 "io_size": 4096, 00:25:10.825 "runtime": 4.021041, 00:25:10.825 "iops": 8707.695345558526, 00:25:10.825 "mibps": 34.01443494358799, 00:25:10.825 "io_failed": 0, 00:25:10.825 "io_timeout": 0, 00:25:10.825 "avg_latency_us": 14662.652540038931, 00:25:10.825 "min_latency_us": 248.91076923076923, 00:25:10.825 "max_latency_us": 32667.175384615384 00:25:10.825 } 00:25:10.825 ], 00:25:10.825 "core_count": 1 00:25:10.825 } 00:25:10.825 06:51:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:25:11.170 [2024-12-06 06:51:23.597220] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:25:11.170 Running I/O for 4 seconds... 
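The final pass switches the workload to verify, which reads completed writes back and compares the data; its result JSON additionally records the checked range, start 0x0 length 0x1400000 (20971520 blocks of 4 KiB, matching the device's 20971520 L2P entries, i.e. the full FTL namespace). The invocation:

  # QD128 verify for 4 s with 4 KiB I/Os; written data is re-read and checked.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096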
00:25:13.063 6966.00 IOPS, 27.21 MiB/s [2024-12-06T06:51:26.746Z] 7154.50 IOPS, 27.95 MiB/s [2024-12-06T06:51:27.690Z] 7032.00 IOPS, 27.47 MiB/s [2024-12-06T06:51:27.690Z] 7011.50 IOPS, 27.39 MiB/s 00:25:14.949 Latency(us) 00:25:14.949 [2024-12-06T06:51:27.690Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:14.949 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:14.949 Verification LBA range: start 0x0 length 0x1400000 00:25:14.949 ftl0 : 4.01 7023.83 27.44 0.00 0.00 18169.32 277.27 37103.46 00:25:14.949 [2024-12-06T06:51:27.690Z] =================================================================================================================== 00:25:14.949 [2024-12-06T06:51:27.690Z] Total : 7023.83 27.44 0.00 0.00 18169.32 0.00 37103.46 00:25:14.949 { 00:25:14.949 "results": [ 00:25:14.949 { 00:25:14.949 "job": "ftl0", 00:25:14.949 "core_mask": "0x1", 00:25:14.949 "workload": "verify", 00:25:14.949 "status": "finished", 00:25:14.949 "verify_range": { 00:25:14.949 "start": 0, 00:25:14.949 "length": 20971520 00:25:14.949 }, 00:25:14.949 "queue_depth": 128, 00:25:14.949 "io_size": 4096, 00:25:14.949 "runtime": 4.011058, 00:25:14.949 "iops": 7023.832614736561, 00:25:14.949 "mibps": 27.43684615131469, 00:25:14.949 "io_failed": 0, 00:25:14.949 "io_timeout": 0, 00:25:14.949 "avg_latency_us": 18169.318553006287, 00:25:14.949 "min_latency_us": 277.2676923076923, 00:25:14.949 "max_latency_us": 37103.45846153846 00:25:14.949 } 00:25:14.949 ], 00:25:14.949 "core_count": 1 00:25:14.949 } 00:25:14.949 [2024-12-06 06:51:27.627541] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:25:14.949 06:51:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:25:15.209 [2024-12-06 06:51:27.829960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.209 [2024-12-06 06:51:27.830030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:15.209 [2024-12-06 06:51:27.830045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:15.209 [2024-12-06 06:51:27.830056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.209 [2024-12-06 06:51:27.830080] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:15.209 [2024-12-06 06:51:27.832855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.209 [2024-12-06 06:51:27.832886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:15.209 [2024-12-06 06:51:27.832898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.755 ms 00:25:15.209 [2024-12-06 06:51:27.832907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.209 [2024-12-06 06:51:27.834408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.209 [2024-12-06 06:51:27.834442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:15.209 [2024-12-06 06:51:27.834460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.476 ms 00:25:15.209 [2024-12-06 06:51:27.834483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.470 [2024-12-06 06:51:28.003169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.471 [2024-12-06 06:51:28.003241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:25:15.471 [2024-12-06 06:51:28.003262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 168.654 ms 00:25:15.471 [2024-12-06 06:51:28.003273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.471 [2024-12-06 06:51:28.009519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.471 [2024-12-06 06:51:28.009565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:15.471 [2024-12-06 06:51:28.009577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.208 ms 00:25:15.471 [2024-12-06 06:51:28.009590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.471 [2024-12-06 06:51:28.033661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.471 [2024-12-06 06:51:28.033697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:15.471 [2024-12-06 06:51:28.033712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.012 ms 00:25:15.471 [2024-12-06 06:51:28.033720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.471 [2024-12-06 06:51:28.048368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.471 [2024-12-06 06:51:28.048404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:15.471 [2024-12-06 06:51:28.048416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.610 ms 00:25:15.471 [2024-12-06 06:51:28.048425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.471 [2024-12-06 06:51:28.048578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.471 [2024-12-06 06:51:28.048591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:15.471 [2024-12-06 06:51:28.048605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:25:15.471 [2024-12-06 06:51:28.048613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.471 [2024-12-06 06:51:28.071562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.471 [2024-12-06 06:51:28.071599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:15.471 [2024-12-06 06:51:28.071613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.910 ms 00:25:15.471 [2024-12-06 06:51:28.071621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.471 [2024-12-06 06:51:28.094072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.471 [2024-12-06 06:51:28.094107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:15.471 [2024-12-06 06:51:28.094121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.413 ms 00:25:15.471 [2024-12-06 06:51:28.094129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.471 [2024-12-06 06:51:28.116561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.471 [2024-12-06 06:51:28.116610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:15.471 [2024-12-06 06:51:28.116623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.393 ms 00:25:15.471 [2024-12-06 06:51:28.116631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.471 [2024-12-06 06:51:28.138121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.471 [2024-12-06 06:51:28.138152] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:25:15.471 [2024-12-06 06:51:28.138168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.419 ms
00:25:15.471 [2024-12-06 06:51:28.138177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:15.471 [2024-12-06 06:51:28.138212] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:25:15.471 [2024-12-06 06:51:28.138227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:25:15.471 [2024-12-06 06:51:28.138240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 2-99: 0 / 261120 wr_cnt: 0 state: free
00:25:15.472 [2024-12-06 06:51:28.139116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:25:15.472 [2024-12-06 06:51:28.139133] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:25:15.472 [2024-12-06 06:51:28.139143] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b56d5da3-4708-4c22-9517-491021f51ac7
00:25:15.472 [2024-12-06 06:51:28.139155] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:25:15.472 [2024-12-06 06:51:28.139163] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:25:15.472 [2024-12-06 06:51:28.139171] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:25:15.472 [2024-12-06 06:51:28.139181] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:25:15.472 [2024-12-06 06:51:28.139188] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:25:15.472 [2024-12-06 06:51:28.139198] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:25:15.472 [2024-12-06 06:51:28.139205] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:25:15.472 [2024-12-06 06:51:28.139216] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:25:15.472 [2024-12-06 06:51:28.139223] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:25:15.472 [2024-12-06 06:51:28.139232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:15.472 [2024-12-06 06:51:28.139239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:25:15.472 [2024-12-06 06:51:28.139250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.022 ms
00:25:15.472 [2024-12-06 06:51:28.139257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:15.472 [2024-12-06 06:51:28.152106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:15.472 [2024-12-06 06:51:28.152138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:25:15.472 [2024-12-06 06:51:28.152151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.795 ms
00:25:15.472 [2024-12-06 06:51:28.152159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:15.472 [2024-12-06 06:51:28.152530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:15.472 [2024-12-06 06:51:28.152545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:25:15.472 [2024-12-06 06:51:28.152556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.351 ms
00:25:15.472 [2024-12-06 06:51:28.152565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:15.472 [2024-12-06 06:51:28.189659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:15.472 [2024-12-06 06:51:28.189704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:25:15.472 [2024-12-06 06:51:28.189721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:15.472 [2024-12-06 06:51:28.189730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
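The shutdown dump above shows the write-amplification bookkeeping directly: 960 total (device) writes against 0 user writes, so WAF, the ratio of device writes to user writes, is reported as inf; every write in this run was metadata traffic from creating and shutting down the device. A minimal helper for recovering WAF from such a dump could look like the following sketch (hypothetical, not part of the SPDK tree; it assumes the "total writes" / "user writes" lines appear as printed above):

    # Hypothetical helper: read an FTL shutdown log and report
    # WAF = device writes / user writes, printing "inf" for 0 user
    # writes exactly as ftl_dev_dump_stats does above (960 / 0 -> inf).
    waf_from_log() {
        awk '
            /total writes:/ { total = $NF }
            /user writes:/  { user  = $NF }
            END {
                if (user == 0) print "inf"
                else           printf "%.3f\n", total / user
            }
        ' "$1"
    }
    # usage: waf_from_log ftl0_shutdown.log    # -> inf for the run above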
00:25:15.472 [2024-12-06 06:51:28.189802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:15.472 [2024-12-06 06:51:28.189811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:25:15.472 [2024-12-06 06:51:28.189821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:15.472 [2024-12-06 06:51:28.189828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:15.472 [2024-12-06 06:51:28.189914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:15.472 [2024-12-06 06:51:28.189925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:25:15.472 [2024-12-06 06:51:28.189935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:15.472 [2024-12-06 06:51:28.189943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:15.472 [2024-12-06 06:51:28.189960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:15.472 [2024-12-06 06:51:28.189968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:25:15.472 [2024-12-06 06:51:28.189978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:15.472 [2024-12-06 06:51:28.189986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:15.732 [2024-12-06 06:51:28.271391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:15.732 [2024-12-06 06:51:28.271481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:25:15.732 [2024-12-06 06:51:28.271500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:15.732 [2024-12-06 06:51:28.271510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:15.732 [2024-12-06 06:51:28.337950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:15.732 [2024-12-06 06:51:28.338010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:25:15.732 [2024-12-06 06:51:28.338025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:15.732 [2024-12-06 06:51:28.338033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:15.732 [2024-12-06 06:51:28.338151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:15.732 [2024-12-06 06:51:28.338161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:25:15.732 [2024-12-06 06:51:28.338172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:15.732 [2024-12-06 06:51:28.338180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:15.732 [2024-12-06 06:51:28.338225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:15.732 [2024-12-06 06:51:28.338234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:25:15.732 [2024-12-06 06:51:28.338245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:15.732 [2024-12-06 06:51:28.338252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:15.732 [2024-12-06 06:51:28.338347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:15.732 [2024-12-06 06:51:28.338367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:25:15.732 [2024-12-06 06:51:28.338379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000
ms 00:25:15.732 [2024-12-06 06:51:28.338387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.732 [2024-12-06 06:51:28.338426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:15.732 [2024-12-06 06:51:28.338436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:15.732 [2024-12-06 06:51:28.338446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:15.732 [2024-12-06 06:51:28.338454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.732 [2024-12-06 06:51:28.338514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:15.732 [2024-12-06 06:51:28.338532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:15.732 [2024-12-06 06:51:28.338542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:15.732 [2024-12-06 06:51:28.338559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.732 [2024-12-06 06:51:28.338604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:15.732 [2024-12-06 06:51:28.338615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:15.732 [2024-12-06 06:51:28.338624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:15.732 [2024-12-06 06:51:28.338634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.732 [2024-12-06 06:51:28.338773] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 508.767 ms, result 0 00:25:15.732 true 00:25:15.732 06:51:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 76332 00:25:15.732 06:51:28 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 76332 ']' 00:25:15.732 06:51:28 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 76332 00:25:15.732 06:51:28 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:25:15.732 06:51:28 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:15.732 06:51:28 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76332 00:25:15.732 killing process with pid 76332 00:25:15.732 Received shutdown signal, test time was about 4.000000 seconds 00:25:15.732 00:25:15.732 Latency(us) 00:25:15.732 [2024-12-06T06:51:28.473Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:15.732 [2024-12-06T06:51:28.473Z] =================================================================================================================== 00:25:15.732 [2024-12-06T06:51:28.473Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:15.732 06:51:28 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:15.732 06:51:28 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:15.732 06:51:28 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76332' 00:25:15.732 06:51:28 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 76332 00:25:15.732 06:51:28 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 76332 00:25:27.945 Remove shared memory files 00:25:27.945 06:51:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:27.945 06:51:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:25:27.945 06:51:39 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:27.945 06:51:39 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:25:27.945 06:51:39 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:25:27.945 06:51:39 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:25:27.945 06:51:39 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:27.945 06:51:39 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:25:27.945 00:25:27.945 real 0m30.727s 00:25:27.945 user 0m33.314s 00:25:27.945 sys 0m0.935s 00:25:27.945 06:51:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:27.945 ************************************ 00:25:27.945 END TEST ftl_bdevperf 00:25:27.945 06:51:39 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:27.945 ************************************ 00:25:27.945 06:51:39 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:25:27.945 06:51:39 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:27.945 06:51:39 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:27.945 06:51:39 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:27.945 ************************************ 00:25:27.945 START TEST ftl_trim 00:25:27.945 ************************************ 00:25:27.945 06:51:39 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:25:27.945 * Looking for test storage... 00:25:27.945 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:27.945 06:51:39 ftl.ftl_trim -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:27.945 06:51:39 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lcov --version 00:25:27.945 06:51:39 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:27.945 06:51:39 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:27.945 06:51:39 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:27.945 06:51:39 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:27.945 06:51:39 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:27.945 06:51:39 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:25:27.945 06:51:39 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:25:27.945 06:51:39 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:25:27.945 06:51:39 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:25:27.945 06:51:39 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:25:27.945 06:51:39 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:25:27.945 06:51:39 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:25:27.945 06:51:39 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:27.945 06:51:39 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:25:27.945 06:51:39 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:25:27.945 06:51:39 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:27.945 06:51:39 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:27.945 06:51:39 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:25:27.945 06:51:39 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:25:27.945 06:51:39 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:27.945 06:51:39 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:25:27.945 06:51:39 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:25:27.945 06:51:39 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:25:27.945 06:51:39 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:25:27.945 06:51:39 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:27.945 06:51:39 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:25:27.945 06:51:39 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:25:27.945 06:51:39 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:27.945 06:51:39 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:27.945 06:51:39 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:25:27.945 06:51:39 ftl.ftl_trim -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:27.945 06:51:39 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:27.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:27.945 --rc genhtml_branch_coverage=1 00:25:27.945 --rc genhtml_function_coverage=1 00:25:27.945 --rc genhtml_legend=1 00:25:27.945 --rc geninfo_all_blocks=1 00:25:27.945 --rc geninfo_unexecuted_blocks=1 00:25:27.945 00:25:27.945 ' 00:25:27.945 06:51:39 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:27.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:27.945 --rc genhtml_branch_coverage=1 00:25:27.945 --rc genhtml_function_coverage=1 00:25:27.945 --rc genhtml_legend=1 00:25:27.945 --rc geninfo_all_blocks=1 00:25:27.945 --rc geninfo_unexecuted_blocks=1 00:25:27.945 00:25:27.945 ' 00:25:27.945 06:51:39 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:27.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:27.945 --rc genhtml_branch_coverage=1 00:25:27.945 --rc genhtml_function_coverage=1 00:25:27.945 --rc genhtml_legend=1 00:25:27.945 --rc geninfo_all_blocks=1 00:25:27.945 --rc geninfo_unexecuted_blocks=1 00:25:27.945 00:25:27.945 ' 00:25:27.945 06:51:39 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:27.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:27.945 --rc genhtml_branch_coverage=1 00:25:27.945 --rc genhtml_function_coverage=1 00:25:27.945 --rc genhtml_legend=1 00:25:27.945 --rc geninfo_all_blocks=1 00:25:27.945 --rc geninfo_unexecuted_blocks=1 00:25:27.945 00:25:27.945 ' 00:25:27.945 06:51:39 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:27.945 06:51:39 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:25:27.945 06:51:39 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:27.946 06:51:39 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:27.946 06:51:39 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
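The xtrace above is scripts/common.sh deciding which lcov options to use by comparing the installed lcov version against a threshold: cmp_versions splits both version strings on '.', '-' and ':' (the IFS=.-: and read -ra ver1/ver2 steps) and compares the components left to right. A condensed sketch of that logic, assuming purely numeric components (the helper name is hypothetical, not the function in the tree):

    # Return success (0) when version $1 sorts strictly before $2.
    # Components are split on '.', '-' and ':'; missing ones default to 0,
    # so "1.15" vs "2" compares as 1 < 2 and succeeds, as in the trace above.
    version_lt() {
        local IFS=.-:
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal versions are not "less than"
    }
    # version_lt 1.15 2 && echo "use lcov 1.x option set"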
00:25:27.946 06:51:39 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:27.946 06:51:39 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:27.946 06:51:39 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:27.946 06:51:39 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:27.946 06:51:39 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:27.946 06:51:39 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:27.946 06:51:39 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:27.946 06:51:39 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:27.946 06:51:39 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:27.946 06:51:39 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:27.946 06:51:39 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:27.946 06:51:39 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:27.946 06:51:39 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:27.946 06:51:39 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:27.946 06:51:39 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:27.946 06:51:39 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:27.946 06:51:39 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:27.946 06:51:39 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:27.946 06:51:39 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:27.946 06:51:39 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:27.946 06:51:39 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:27.946 06:51:39 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:27.946 06:51:39 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:27.946 06:51:39 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:27.946 06:51:39 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:27.946 06:51:39 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:25:27.946 06:51:39 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:25:27.946 06:51:39 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:25:27.946 06:51:39 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:25:27.946 06:51:39 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:25:27.946 06:51:39 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:25:27.946 06:51:39 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:25:27.946 06:51:39 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:25:27.946 06:51:39 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:27.946 06:51:39 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:27.946 06:51:39 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:25:27.946 06:51:39 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=76669 00:25:27.946 06:51:39 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:25:27.946 06:51:39 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 76669 00:25:27.946 06:51:39 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76669 ']' 00:25:27.946 06:51:39 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:27.946 06:51:39 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:27.946 06:51:39 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:27.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:27.946 06:51:39 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:27.946 06:51:39 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:25:27.946 [2024-12-06 06:51:39.342648] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:25:27.946 [2024-12-06 06:51:39.342814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76669 ] 00:25:27.946 [2024-12-06 06:51:39.515969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:27.946 [2024-12-06 06:51:39.617775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:27.946 [2024-12-06 06:51:39.618161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:27.946 [2024-12-06 06:51:39.618184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:27.946 06:51:40 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:27.946 06:51:40 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:25:27.946 06:51:40 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:27.946 06:51:40 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:25:27.946 06:51:40 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:27.946 06:51:40 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:25:27.946 06:51:40 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:25:27.946 06:51:40 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:27.946 06:51:40 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:27.946 06:51:40 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:25:27.946 06:51:40 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:27.946 06:51:40 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:25:27.946 06:51:40 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:27.946 06:51:40 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:25:27.946 06:51:40 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:25:27.946 06:51:40 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:28.205 06:51:40 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:28.205 { 00:25:28.205 "name": "nvme0n1", 00:25:28.205 "aliases": [ 
00:25:28.205 "e898d969-9e35-470e-bdae-639d01467f21" 00:25:28.205 ], 00:25:28.205 "product_name": "NVMe disk", 00:25:28.205 "block_size": 4096, 00:25:28.205 "num_blocks": 1310720, 00:25:28.205 "uuid": "e898d969-9e35-470e-bdae-639d01467f21", 00:25:28.205 "numa_id": -1, 00:25:28.205 "assigned_rate_limits": { 00:25:28.205 "rw_ios_per_sec": 0, 00:25:28.205 "rw_mbytes_per_sec": 0, 00:25:28.205 "r_mbytes_per_sec": 0, 00:25:28.205 "w_mbytes_per_sec": 0 00:25:28.205 }, 00:25:28.205 "claimed": true, 00:25:28.205 "claim_type": "read_many_write_one", 00:25:28.205 "zoned": false, 00:25:28.205 "supported_io_types": { 00:25:28.205 "read": true, 00:25:28.205 "write": true, 00:25:28.205 "unmap": true, 00:25:28.205 "flush": true, 00:25:28.205 "reset": true, 00:25:28.205 "nvme_admin": true, 00:25:28.205 "nvme_io": true, 00:25:28.205 "nvme_io_md": false, 00:25:28.205 "write_zeroes": true, 00:25:28.205 "zcopy": false, 00:25:28.205 "get_zone_info": false, 00:25:28.205 "zone_management": false, 00:25:28.205 "zone_append": false, 00:25:28.205 "compare": true, 00:25:28.205 "compare_and_write": false, 00:25:28.205 "abort": true, 00:25:28.205 "seek_hole": false, 00:25:28.205 "seek_data": false, 00:25:28.205 "copy": true, 00:25:28.205 "nvme_iov_md": false 00:25:28.205 }, 00:25:28.205 "driver_specific": { 00:25:28.205 "nvme": [ 00:25:28.205 { 00:25:28.205 "pci_address": "0000:00:11.0", 00:25:28.205 "trid": { 00:25:28.205 "trtype": "PCIe", 00:25:28.205 "traddr": "0000:00:11.0" 00:25:28.205 }, 00:25:28.205 "ctrlr_data": { 00:25:28.205 "cntlid": 0, 00:25:28.205 "vendor_id": "0x1b36", 00:25:28.205 "model_number": "QEMU NVMe Ctrl", 00:25:28.205 "serial_number": "12341", 00:25:28.205 "firmware_revision": "8.0.0", 00:25:28.205 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:28.205 "oacs": { 00:25:28.205 "security": 0, 00:25:28.205 "format": 1, 00:25:28.205 "firmware": 0, 00:25:28.205 "ns_manage": 1 00:25:28.205 }, 00:25:28.205 "multi_ctrlr": false, 00:25:28.205 "ana_reporting": false 00:25:28.205 }, 00:25:28.205 "vs": { 00:25:28.205 "nvme_version": "1.4" 00:25:28.205 }, 00:25:28.205 "ns_data": { 00:25:28.205 "id": 1, 00:25:28.205 "can_share": false 00:25:28.205 } 00:25:28.205 } 00:25:28.205 ], 00:25:28.205 "mp_policy": "active_passive" 00:25:28.205 } 00:25:28.205 } 00:25:28.205 ]' 00:25:28.205 06:51:40 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:28.205 06:51:40 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:25:28.205 06:51:40 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:28.205 06:51:40 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:25:28.205 06:51:40 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:25:28.205 06:51:40 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:25:28.205 06:51:40 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:25:28.205 06:51:40 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:28.205 06:51:40 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:25:28.205 06:51:40 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:28.205 06:51:40 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:28.463 06:51:40 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=f9f06661-c6b8-45d0-946d-e7d1858d86fb 00:25:28.463 06:51:40 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:25:28.463 06:51:40 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u f9f06661-c6b8-45d0-946d-e7d1858d86fb 00:25:28.721 06:51:41 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:28.721 06:51:41 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=0837f94a-1e86-49e8-8289-a00fc58d04db 00:25:28.721 06:51:41 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0837f94a-1e86-49e8-8289-a00fc58d04db 00:25:29.049 06:51:41 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=c91541a8-e1cb-432c-8a9f-4f981b762cbf 00:25:29.049 06:51:41 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 c91541a8-e1cb-432c-8a9f-4f981b762cbf 00:25:29.049 06:51:41 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:25:29.049 06:51:41 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:29.049 06:51:41 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=c91541a8-e1cb-432c-8a9f-4f981b762cbf 00:25:29.049 06:51:41 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:25:29.049 06:51:41 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size c91541a8-e1cb-432c-8a9f-4f981b762cbf 00:25:29.049 06:51:41 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=c91541a8-e1cb-432c-8a9f-4f981b762cbf 00:25:29.049 06:51:41 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:29.049 06:51:41 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:25:29.049 06:51:41 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:25:29.049 06:51:41 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c91541a8-e1cb-432c-8a9f-4f981b762cbf 00:25:29.307 06:51:41 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:29.307 { 00:25:29.307 "name": "c91541a8-e1cb-432c-8a9f-4f981b762cbf", 00:25:29.307 "aliases": [ 00:25:29.307 "lvs/nvme0n1p0" 00:25:29.307 ], 00:25:29.307 "product_name": "Logical Volume", 00:25:29.307 "block_size": 4096, 00:25:29.307 "num_blocks": 26476544, 00:25:29.307 "uuid": "c91541a8-e1cb-432c-8a9f-4f981b762cbf", 00:25:29.307 "assigned_rate_limits": { 00:25:29.307 "rw_ios_per_sec": 0, 00:25:29.307 "rw_mbytes_per_sec": 0, 00:25:29.307 "r_mbytes_per_sec": 0, 00:25:29.307 "w_mbytes_per_sec": 0 00:25:29.307 }, 00:25:29.307 "claimed": false, 00:25:29.307 "zoned": false, 00:25:29.307 "supported_io_types": { 00:25:29.307 "read": true, 00:25:29.307 "write": true, 00:25:29.307 "unmap": true, 00:25:29.307 "flush": false, 00:25:29.307 "reset": true, 00:25:29.307 "nvme_admin": false, 00:25:29.307 "nvme_io": false, 00:25:29.307 "nvme_io_md": false, 00:25:29.307 "write_zeroes": true, 00:25:29.307 "zcopy": false, 00:25:29.307 "get_zone_info": false, 00:25:29.307 "zone_management": false, 00:25:29.307 "zone_append": false, 00:25:29.307 "compare": false, 00:25:29.307 "compare_and_write": false, 00:25:29.307 "abort": false, 00:25:29.307 "seek_hole": true, 00:25:29.307 "seek_data": true, 00:25:29.307 "copy": false, 00:25:29.307 "nvme_iov_md": false 00:25:29.307 }, 00:25:29.307 "driver_specific": { 00:25:29.307 "lvol": { 00:25:29.307 "lvol_store_uuid": "0837f94a-1e86-49e8-8289-a00fc58d04db", 00:25:29.307 "base_bdev": "nvme0n1", 00:25:29.307 "thin_provision": true, 00:25:29.307 "num_allocated_clusters": 0, 00:25:29.307 "snapshot": false, 00:25:29.307 "clone": false, 00:25:29.307 "esnap_clone": false 00:25:29.307 } 00:25:29.307 } 00:25:29.307 } 00:25:29.307 ]' 00:25:29.307 06:51:41 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:29.307 06:51:41 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:25:29.307 06:51:41 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:29.307 06:51:41 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:29.307 06:51:41 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:29.307 06:51:41 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:25:29.307 06:51:41 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:25:29.307 06:51:41 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:25:29.307 06:51:41 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:29.566 06:51:42 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:29.566 06:51:42 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:29.566 06:51:42 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size c91541a8-e1cb-432c-8a9f-4f981b762cbf 00:25:29.566 06:51:42 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=c91541a8-e1cb-432c-8a9f-4f981b762cbf 00:25:29.567 06:51:42 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:29.567 06:51:42 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:25:29.567 06:51:42 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:25:29.567 06:51:42 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c91541a8-e1cb-432c-8a9f-4f981b762cbf 00:25:29.825 06:51:42 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:29.825 { 00:25:29.825 "name": "c91541a8-e1cb-432c-8a9f-4f981b762cbf", 00:25:29.825 "aliases": [ 00:25:29.825 "lvs/nvme0n1p0" 00:25:29.825 ], 00:25:29.825 "product_name": "Logical Volume", 00:25:29.825 "block_size": 4096, 00:25:29.825 "num_blocks": 26476544, 00:25:29.825 "uuid": "c91541a8-e1cb-432c-8a9f-4f981b762cbf", 00:25:29.825 "assigned_rate_limits": { 00:25:29.825 "rw_ios_per_sec": 0, 00:25:29.825 "rw_mbytes_per_sec": 0, 00:25:29.825 "r_mbytes_per_sec": 0, 00:25:29.825 "w_mbytes_per_sec": 0 00:25:29.825 }, 00:25:29.825 "claimed": false, 00:25:29.825 "zoned": false, 00:25:29.825 "supported_io_types": { 00:25:29.825 "read": true, 00:25:29.825 "write": true, 00:25:29.825 "unmap": true, 00:25:29.825 "flush": false, 00:25:29.825 "reset": true, 00:25:29.825 "nvme_admin": false, 00:25:29.825 "nvme_io": false, 00:25:29.825 "nvme_io_md": false, 00:25:29.825 "write_zeroes": true, 00:25:29.825 "zcopy": false, 00:25:29.825 "get_zone_info": false, 00:25:29.825 "zone_management": false, 00:25:29.825 "zone_append": false, 00:25:29.825 "compare": false, 00:25:29.825 "compare_and_write": false, 00:25:29.825 "abort": false, 00:25:29.825 "seek_hole": true, 00:25:29.825 "seek_data": true, 00:25:29.825 "copy": false, 00:25:29.825 "nvme_iov_md": false 00:25:29.825 }, 00:25:29.825 "driver_specific": { 00:25:29.825 "lvol": { 00:25:29.825 "lvol_store_uuid": "0837f94a-1e86-49e8-8289-a00fc58d04db", 00:25:29.825 "base_bdev": "nvme0n1", 00:25:29.825 "thin_provision": true, 00:25:29.825 "num_allocated_clusters": 0, 00:25:29.825 "snapshot": false, 00:25:29.825 "clone": false, 00:25:29.825 "esnap_clone": false 00:25:29.825 } 00:25:29.825 } 00:25:29.825 } 00:25:29.825 ]' 00:25:29.825 06:51:42 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:29.825 06:51:42 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:25:29.825 06:51:42 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:29.825 06:51:42 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:29.825 06:51:42 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:29.825 06:51:42 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:25:29.825 06:51:42 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:25:29.825 06:51:42 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:30.084 06:51:42 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:25:30.084 06:51:42 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:25:30.084 06:51:42 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size c91541a8-e1cb-432c-8a9f-4f981b762cbf 00:25:30.084 06:51:42 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=c91541a8-e1cb-432c-8a9f-4f981b762cbf 00:25:30.084 06:51:42 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:30.084 06:51:42 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:25:30.084 06:51:42 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:25:30.084 06:51:42 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c91541a8-e1cb-432c-8a9f-4f981b762cbf 00:25:30.341 06:51:42 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:30.341 { 00:25:30.341 "name": "c91541a8-e1cb-432c-8a9f-4f981b762cbf", 00:25:30.341 "aliases": [ 00:25:30.341 "lvs/nvme0n1p0" 00:25:30.341 ], 00:25:30.341 "product_name": "Logical Volume", 00:25:30.341 "block_size": 4096, 00:25:30.341 "num_blocks": 26476544, 00:25:30.341 "uuid": "c91541a8-e1cb-432c-8a9f-4f981b762cbf", 00:25:30.341 "assigned_rate_limits": { 00:25:30.341 "rw_ios_per_sec": 0, 00:25:30.341 "rw_mbytes_per_sec": 0, 00:25:30.341 "r_mbytes_per_sec": 0, 00:25:30.341 "w_mbytes_per_sec": 0 00:25:30.341 }, 00:25:30.341 "claimed": false, 00:25:30.341 "zoned": false, 00:25:30.341 "supported_io_types": { 00:25:30.341 "read": true, 00:25:30.341 "write": true, 00:25:30.341 "unmap": true, 00:25:30.341 "flush": false, 00:25:30.341 "reset": true, 00:25:30.341 "nvme_admin": false, 00:25:30.341 "nvme_io": false, 00:25:30.341 "nvme_io_md": false, 00:25:30.341 "write_zeroes": true, 00:25:30.341 "zcopy": false, 00:25:30.341 "get_zone_info": false, 00:25:30.341 "zone_management": false, 00:25:30.341 "zone_append": false, 00:25:30.341 "compare": false, 00:25:30.341 "compare_and_write": false, 00:25:30.341 "abort": false, 00:25:30.341 "seek_hole": true, 00:25:30.341 "seek_data": true, 00:25:30.341 "copy": false, 00:25:30.341 "nvme_iov_md": false 00:25:30.341 }, 00:25:30.341 "driver_specific": { 00:25:30.341 "lvol": { 00:25:30.341 "lvol_store_uuid": "0837f94a-1e86-49e8-8289-a00fc58d04db", 00:25:30.341 "base_bdev": "nvme0n1", 00:25:30.341 "thin_provision": true, 00:25:30.341 "num_allocated_clusters": 0, 00:25:30.341 "snapshot": false, 00:25:30.341 "clone": false, 00:25:30.341 "esnap_clone": false 00:25:30.341 } 00:25:30.341 } 00:25:30.341 } 00:25:30.341 ]' 00:25:30.341 06:51:42 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:30.341 06:51:42 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:25:30.341 06:51:42 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:30.341 06:51:42 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:25:30.341 06:51:42 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:30.341 06:51:42 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:25:30.341 06:51:42 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:25:30.341 06:51:42 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c91541a8-e1cb-432c-8a9f-4f981b762cbf -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:25:30.600 [2024-12-06 06:51:43.104421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.600 [2024-12-06 06:51:43.104475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:30.600 [2024-12-06 06:51:43.104489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:30.600 [2024-12-06 06:51:43.104495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.600 [2024-12-06 06:51:43.106699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.600 [2024-12-06 06:51:43.106731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:30.600 [2024-12-06 06:51:43.106740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.183 ms 00:25:30.600 [2024-12-06 06:51:43.106746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.600 [2024-12-06 06:51:43.106819] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:30.600 [2024-12-06 06:51:43.107336] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:30.600 [2024-12-06 06:51:43.107360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.600 [2024-12-06 06:51:43.107366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:30.600 [2024-12-06 06:51:43.107374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.547 ms 00:25:30.600 [2024-12-06 06:51:43.107380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.600 [2024-12-06 06:51:43.107487] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 5ee066fa-e7bc-4a33-b1a8-f35f9ed69a0f 00:25:30.600 [2024-12-06 06:51:43.108432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.600 [2024-12-06 06:51:43.108459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:30.600 [2024-12-06 06:51:43.108476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:25:30.600 [2024-12-06 06:51:43.108484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.600 [2024-12-06 06:51:43.113445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.600 [2024-12-06 06:51:43.113479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:30.600 [2024-12-06 06:51:43.113488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.910 ms 00:25:30.600 [2024-12-06 06:51:43.113496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.600 [2024-12-06 06:51:43.113596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.600 [2024-12-06 06:51:43.113606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:30.600 [2024-12-06 06:51:43.113612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.059 ms 00:25:30.600 [2024-12-06 06:51:43.113622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.600 [2024-12-06 06:51:43.113649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.600 [2024-12-06 06:51:43.113658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:30.600 [2024-12-06 06:51:43.113664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:30.600 [2024-12-06 06:51:43.113672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.600 [2024-12-06 06:51:43.113696] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:30.600 [2024-12-06 06:51:43.116586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.600 [2024-12-06 06:51:43.116612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:30.600 [2024-12-06 06:51:43.116622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.893 ms 00:25:30.600 [2024-12-06 06:51:43.116628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.600 [2024-12-06 06:51:43.116673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.600 [2024-12-06 06:51:43.116691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:30.600 [2024-12-06 06:51:43.116699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:30.600 [2024-12-06 06:51:43.116705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.600 [2024-12-06 06:51:43.116731] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:30.600 [2024-12-06 06:51:43.116842] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:30.600 [2024-12-06 06:51:43.116859] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:30.600 [2024-12-06 06:51:43.116868] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:30.600 [2024-12-06 06:51:43.116878] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:30.600 [2024-12-06 06:51:43.116885] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:30.600 [2024-12-06 06:51:43.116893] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:30.600 [2024-12-06 06:51:43.116899] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:30.600 [2024-12-06 06:51:43.116907] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:30.600 [2024-12-06 06:51:43.116914] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:30.600 [2024-12-06 06:51:43.116921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.600 [2024-12-06 06:51:43.116927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:30.600 [2024-12-06 06:51:43.116934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:25:30.600 [2024-12-06 06:51:43.116939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.600 [2024-12-06 06:51:43.117015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.600 
[2024-12-06 06:51:43.117027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:30.600 [2024-12-06 06:51:43.117035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:25:30.600 [2024-12-06 06:51:43.117040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.600 [2024-12-06 06:51:43.117134] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:30.600 [2024-12-06 06:51:43.117142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:30.600 [2024-12-06 06:51:43.117150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:30.600 [2024-12-06 06:51:43.117155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:30.600 [2024-12-06 06:51:43.117162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:30.600 [2024-12-06 06:51:43.117168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:30.600 [2024-12-06 06:51:43.117174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:30.600 [2024-12-06 06:51:43.117180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:30.600 [2024-12-06 06:51:43.117186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:30.600 [2024-12-06 06:51:43.117191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:30.600 [2024-12-06 06:51:43.117198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:30.600 [2024-12-06 06:51:43.117203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:30.600 [2024-12-06 06:51:43.117210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:30.600 [2024-12-06 06:51:43.117215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:30.600 [2024-12-06 06:51:43.117222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:30.600 [2024-12-06 06:51:43.117227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:30.600 [2024-12-06 06:51:43.117236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:30.600 [2024-12-06 06:51:43.117241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:30.600 [2024-12-06 06:51:43.117247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:30.600 [2024-12-06 06:51:43.117252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:30.600 [2024-12-06 06:51:43.117259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:30.600 [2024-12-06 06:51:43.117264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:30.600 [2024-12-06 06:51:43.117270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:30.600 [2024-12-06 06:51:43.117277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:30.600 [2024-12-06 06:51:43.117283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:30.600 [2024-12-06 06:51:43.117288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:30.600 [2024-12-06 06:51:43.117295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:30.600 [2024-12-06 06:51:43.117300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:30.600 [2024-12-06 06:51:43.117306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:25:30.600 [2024-12-06 06:51:43.117311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:30.600 [2024-12-06 06:51:43.117317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:30.600 [2024-12-06 06:51:43.117322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:30.600 [2024-12-06 06:51:43.117330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:30.600 [2024-12-06 06:51:43.117335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:30.600 [2024-12-06 06:51:43.117342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:30.600 [2024-12-06 06:51:43.117347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:30.601 [2024-12-06 06:51:43.117353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:30.601 [2024-12-06 06:51:43.117358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:30.601 [2024-12-06 06:51:43.117365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:30.601 [2024-12-06 06:51:43.117370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:30.601 [2024-12-06 06:51:43.117376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:30.601 [2024-12-06 06:51:43.117381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:30.601 [2024-12-06 06:51:43.117387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:30.601 [2024-12-06 06:51:43.117392] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:30.601 [2024-12-06 06:51:43.117400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:30.601 [2024-12-06 06:51:43.117405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:30.601 [2024-12-06 06:51:43.117412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:30.601 [2024-12-06 06:51:43.117418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:30.601 [2024-12-06 06:51:43.117425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:30.601 [2024-12-06 06:51:43.117430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:30.601 [2024-12-06 06:51:43.117437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:30.601 [2024-12-06 06:51:43.117442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:30.601 [2024-12-06 06:51:43.117449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:30.601 [2024-12-06 06:51:43.117455] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:30.601 [2024-12-06 06:51:43.117484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:30.601 [2024-12-06 06:51:43.117493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:30.601 [2024-12-06 06:51:43.117500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:30.601 [2024-12-06 06:51:43.117506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:25:30.601 [2024-12-06 06:51:43.117513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:30.601 [2024-12-06 06:51:43.117519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:30.601 [2024-12-06 06:51:43.117525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:30.601 [2024-12-06 06:51:43.117531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:30.601 [2024-12-06 06:51:43.117538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:30.601 [2024-12-06 06:51:43.117543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:30.601 [2024-12-06 06:51:43.117553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:30.601 [2024-12-06 06:51:43.117559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:30.601 [2024-12-06 06:51:43.117566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:30.601 [2024-12-06 06:51:43.117571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:30.601 [2024-12-06 06:51:43.117578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:30.601 [2024-12-06 06:51:43.117584] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:30.601 [2024-12-06 06:51:43.117593] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:30.601 [2024-12-06 06:51:43.117599] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:30.601 [2024-12-06 06:51:43.117606] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:30.601 [2024-12-06 06:51:43.117612] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:30.601 [2024-12-06 06:51:43.117619] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:30.601 [2024-12-06 06:51:43.117625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.601 [2024-12-06 06:51:43.117632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:30.601 [2024-12-06 06:51:43.117638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.546 ms 00:25:30.601 [2024-12-06 06:51:43.117645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.601 [2024-12-06 06:51:43.117709] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:25:30.601 [2024-12-06 06:51:43.117720] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:33.123 [2024-12-06 06:51:45.531569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.123 [2024-12-06 06:51:45.531629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:33.123 [2024-12-06 06:51:45.531643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2413.850 ms 00:25:33.123 [2024-12-06 06:51:45.531653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.123 [2024-12-06 06:51:45.556856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.123 [2024-12-06 06:51:45.556953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:33.123 [2024-12-06 06:51:45.556965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.964 ms 00:25:33.123 [2024-12-06 06:51:45.556974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.123 [2024-12-06 06:51:45.557129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.123 [2024-12-06 06:51:45.557141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:33.123 [2024-12-06 06:51:45.557165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:25:33.123 [2024-12-06 06:51:45.557176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.123 [2024-12-06 06:51:45.601263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.123 [2024-12-06 06:51:45.601319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:33.123 [2024-12-06 06:51:45.601334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.057 ms 00:25:33.123 [2024-12-06 06:51:45.601346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.123 [2024-12-06 06:51:45.601441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.123 [2024-12-06 06:51:45.601454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:33.123 [2024-12-06 06:51:45.601475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:33.123 [2024-12-06 06:51:45.601485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.123 [2024-12-06 06:51:45.601816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.123 [2024-12-06 06:51:45.601845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:33.123 [2024-12-06 06:51:45.601854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:25:33.123 [2024-12-06 06:51:45.601863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.123 [2024-12-06 06:51:45.601994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.123 [2024-12-06 06:51:45.602010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:33.123 [2024-12-06 06:51:45.602030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:25:33.123 [2024-12-06 06:51:45.602041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.123 [2024-12-06 06:51:45.616442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.123 [2024-12-06 06:51:45.616494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:25:33.123 [2024-12-06 06:51:45.616505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.372 ms 00:25:33.123 [2024-12-06 06:51:45.616515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.123 [2024-12-06 06:51:45.627844] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:33.123 [2024-12-06 06:51:45.642238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.123 [2024-12-06 06:51:45.642281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:33.123 [2024-12-06 06:51:45.642296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.618 ms 00:25:33.123 [2024-12-06 06:51:45.642305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.123 [2024-12-06 06:51:45.709797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.123 [2024-12-06 06:51:45.709857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:33.123 [2024-12-06 06:51:45.709874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.408 ms 00:25:33.123 [2024-12-06 06:51:45.709882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.123 [2024-12-06 06:51:45.710083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.123 [2024-12-06 06:51:45.710094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:33.123 [2024-12-06 06:51:45.710107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:25:33.123 [2024-12-06 06:51:45.710115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.123 [2024-12-06 06:51:45.733140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.123 [2024-12-06 06:51:45.733186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:33.123 [2024-12-06 06:51:45.733200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.989 ms 00:25:33.123 [2024-12-06 06:51:45.733211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.123 [2024-12-06 06:51:45.756159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.123 [2024-12-06 06:51:45.756201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:33.123 [2024-12-06 06:51:45.756215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.894 ms 00:25:33.123 [2024-12-06 06:51:45.756223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.123 [2024-12-06 06:51:45.756812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.123 [2024-12-06 06:51:45.756835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:33.123 [2024-12-06 06:51:45.756845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.547 ms 00:25:33.123 [2024-12-06 06:51:45.756853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.123 [2024-12-06 06:51:45.826824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.123 [2024-12-06 06:51:45.826882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:33.123 [2024-12-06 06:51:45.826901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.933 ms 00:25:33.123 [2024-12-06 06:51:45.826910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
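(Aside for readers following the "Initialize L2P" step above: a minimal bash sketch — ours, not part of the SPDK test scripts — that reproduces the sizing math from figures printed in this log. One L2P entry maps one exposed 4 KiB block, which is why the entry count below equals the bdev's num_blocks.)

  # Hypothetical cross-check of the FTL L2P sizing reported in this log.
  entries=23592960   # "L2P entries: 23592960" from the layout dump
  addr_sz=4          # "L2P address size: 4" (bytes per entry)
  blk_sz=4096        # "block_size": 4096 from bdev_get_bdevs below
  # Full table: 23592960 * 4 B = 94371840 B = 90 MiB, matching
  # "Region l2p ... blocks: 90.00 MiB"; only ~60 MiB of it is kept resident,
  # hence "l2p maximum resident size is: 59 (of 60) MiB" above.
  echo "L2P table: $(( entries * addr_sz / 1024 / 1024 )) MiB"
  # Exposed capacity: 23592960 blocks * 4 KiB = 90 GiB behind ftl0.
  echo "capacity:  $(( entries * blk_sz / 1024 / 1024 / 1024 )) GiB"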
00:25:33.123 [2024-12-06 06:51:45.851879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.123 [2024-12-06 06:51:45.851931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:33.123 [2024-12-06 06:51:45.851945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.859 ms 00:25:33.123 [2024-12-06 06:51:45.851956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.380 [2024-12-06 06:51:45.876285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.380 [2024-12-06 06:51:45.876332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:33.380 [2024-12-06 06:51:45.876346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.261 ms 00:25:33.380 [2024-12-06 06:51:45.876354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.380 [2024-12-06 06:51:45.899590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.380 [2024-12-06 06:51:45.899649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:33.380 [2024-12-06 06:51:45.899663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.154 ms 00:25:33.380 [2024-12-06 06:51:45.899671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.380 [2024-12-06 06:51:45.899732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.380 [2024-12-06 06:51:45.899743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:33.380 [2024-12-06 06:51:45.899756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:33.380 [2024-12-06 06:51:45.899763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.380 [2024-12-06 06:51:45.899833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.380 [2024-12-06 06:51:45.899842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:33.380 [2024-12-06 06:51:45.899852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:25:33.380 [2024-12-06 06:51:45.899859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.380 [2024-12-06 06:51:45.900644] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:33.380 [2024-12-06 06:51:45.903683] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2795.941 ms, result 0 00:25:33.380 [2024-12-06 06:51:45.904306] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:33.380 { 00:25:33.380 "name": "ftl0", 00:25:33.380 "uuid": "5ee066fa-e7bc-4a33-b1a8-f35f9ed69a0f" 00:25:33.380 } 00:25:33.380 06:51:45 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:25:33.380 06:51:45 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:25:33.380 06:51:45 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:33.380 06:51:45 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:25:33.380 06:51:45 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:33.380 06:51:45 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:33.380 06:51:45 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:33.637 06:51:46 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:25:33.637 [ 00:25:33.637 { 00:25:33.637 "name": "ftl0", 00:25:33.637 "aliases": [ 00:25:33.637 "5ee066fa-e7bc-4a33-b1a8-f35f9ed69a0f" 00:25:33.637 ], 00:25:33.637 "product_name": "FTL disk", 00:25:33.637 "block_size": 4096, 00:25:33.637 "num_blocks": 23592960, 00:25:33.637 "uuid": "5ee066fa-e7bc-4a33-b1a8-f35f9ed69a0f", 00:25:33.637 "assigned_rate_limits": { 00:25:33.637 "rw_ios_per_sec": 0, 00:25:33.637 "rw_mbytes_per_sec": 0, 00:25:33.637 "r_mbytes_per_sec": 0, 00:25:33.637 "w_mbytes_per_sec": 0 00:25:33.637 }, 00:25:33.637 "claimed": false, 00:25:33.637 "zoned": false, 00:25:33.637 "supported_io_types": { 00:25:33.637 "read": true, 00:25:33.637 "write": true, 00:25:33.637 "unmap": true, 00:25:33.637 "flush": true, 00:25:33.637 "reset": false, 00:25:33.637 "nvme_admin": false, 00:25:33.637 "nvme_io": false, 00:25:33.637 "nvme_io_md": false, 00:25:33.637 "write_zeroes": true, 00:25:33.637 "zcopy": false, 00:25:33.637 "get_zone_info": false, 00:25:33.637 "zone_management": false, 00:25:33.637 "zone_append": false, 00:25:33.637 "compare": false, 00:25:33.637 "compare_and_write": false, 00:25:33.637 "abort": false, 00:25:33.637 "seek_hole": false, 00:25:33.637 "seek_data": false, 00:25:33.637 "copy": false, 00:25:33.637 "nvme_iov_md": false 00:25:33.637 }, 00:25:33.637 "driver_specific": { 00:25:33.637 "ftl": { 00:25:33.637 "base_bdev": "c91541a8-e1cb-432c-8a9f-4f981b762cbf", 00:25:33.637 "cache": "nvc0n1p0" 00:25:33.637 } 00:25:33.637 } 00:25:33.637 } 00:25:33.637 ] 00:25:33.637 06:51:46 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:25:33.637 06:51:46 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:25:33.637 06:51:46 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:33.894 06:51:46 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:25:33.894 06:51:46 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:25:34.181 06:51:46 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:25:34.181 { 00:25:34.181 "name": "ftl0", 00:25:34.181 "aliases": [ 00:25:34.181 "5ee066fa-e7bc-4a33-b1a8-f35f9ed69a0f" 00:25:34.181 ], 00:25:34.181 "product_name": "FTL disk", 00:25:34.181 "block_size": 4096, 00:25:34.181 "num_blocks": 23592960, 00:25:34.182 "uuid": "5ee066fa-e7bc-4a33-b1a8-f35f9ed69a0f", 00:25:34.182 "assigned_rate_limits": { 00:25:34.182 "rw_ios_per_sec": 0, 00:25:34.182 "rw_mbytes_per_sec": 0, 00:25:34.182 "r_mbytes_per_sec": 0, 00:25:34.182 "w_mbytes_per_sec": 0 00:25:34.182 }, 00:25:34.182 "claimed": false, 00:25:34.182 "zoned": false, 00:25:34.182 "supported_io_types": { 00:25:34.182 "read": true, 00:25:34.182 "write": true, 00:25:34.182 "unmap": true, 00:25:34.182 "flush": true, 00:25:34.182 "reset": false, 00:25:34.182 "nvme_admin": false, 00:25:34.182 "nvme_io": false, 00:25:34.182 "nvme_io_md": false, 00:25:34.182 "write_zeroes": true, 00:25:34.182 "zcopy": false, 00:25:34.182 "get_zone_info": false, 00:25:34.182 "zone_management": false, 00:25:34.182 "zone_append": false, 00:25:34.182 "compare": false, 00:25:34.182 "compare_and_write": false, 00:25:34.182 "abort": false, 00:25:34.182 "seek_hole": false, 00:25:34.182 "seek_data": false, 00:25:34.182 "copy": false, 00:25:34.182 "nvme_iov_md": false 00:25:34.182 }, 00:25:34.182 "driver_specific": { 00:25:34.182 "ftl": { 00:25:34.182 "base_bdev": "c91541a8-e1cb-432c-8a9f-4f981b762cbf", 
00:25:34.182 "cache": "nvc0n1p0" 00:25:34.182 } 00:25:34.182 } 00:25:34.182 } 00:25:34.182 ]' 00:25:34.182 06:51:46 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:25:34.182 06:51:46 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:25:34.182 06:51:46 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:34.182 [2024-12-06 06:51:46.919657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.182 [2024-12-06 06:51:46.919716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:34.182 [2024-12-06 06:51:46.919732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:34.182 [2024-12-06 06:51:46.919743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.182 [2024-12-06 06:51:46.919773] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:34.443 [2024-12-06 06:51:46.922363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.443 [2024-12-06 06:51:46.922397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:34.443 [2024-12-06 06:51:46.922415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.571 ms 00:25:34.443 [2024-12-06 06:51:46.922424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.443 [2024-12-06 06:51:46.922882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.443 [2024-12-06 06:51:46.922902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:34.443 [2024-12-06 06:51:46.922913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:25:34.443 [2024-12-06 06:51:46.922921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.443 [2024-12-06 06:51:46.926576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.443 [2024-12-06 06:51:46.926596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:34.443 [2024-12-06 06:51:46.926606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.625 ms 00:25:34.443 [2024-12-06 06:51:46.926613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.443 [2024-12-06 06:51:46.933718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.443 [2024-12-06 06:51:46.933749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:34.443 [2024-12-06 06:51:46.933761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.052 ms 00:25:34.443 [2024-12-06 06:51:46.933768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.443 [2024-12-06 06:51:46.957586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.443 [2024-12-06 06:51:46.957630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:34.443 [2024-12-06 06:51:46.957647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.748 ms 00:25:34.443 [2024-12-06 06:51:46.957655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.443 [2024-12-06 06:51:46.972397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.443 [2024-12-06 06:51:46.972440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:34.443 [2024-12-06 06:51:46.972456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 14.673 ms 00:25:34.443 [2024-12-06 06:51:46.972471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.443 [2024-12-06 06:51:46.972701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.443 [2024-12-06 06:51:46.972722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:34.443 [2024-12-06 06:51:46.972733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:25:34.443 [2024-12-06 06:51:46.972741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.443 [2024-12-06 06:51:46.995338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.443 [2024-12-06 06:51:46.995373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:34.443 [2024-12-06 06:51:46.995386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.564 ms 00:25:34.443 [2024-12-06 06:51:46.995394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.443 [2024-12-06 06:51:47.017592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.443 [2024-12-06 06:51:47.017633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:34.443 [2024-12-06 06:51:47.017648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.129 ms 00:25:34.443 [2024-12-06 06:51:47.017655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.443 [2024-12-06 06:51:47.039636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.443 [2024-12-06 06:51:47.039677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:34.443 [2024-12-06 06:51:47.039689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.916 ms 00:25:34.443 [2024-12-06 06:51:47.039697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.443 [2024-12-06 06:51:47.061537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.443 [2024-12-06 06:51:47.061582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:34.443 [2024-12-06 06:51:47.061595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.731 ms 00:25:34.443 [2024-12-06 06:51:47.061602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.443 [2024-12-06 06:51:47.061662] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:34.443 [2024-12-06 06:51:47.061678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:34.443 [2024-12-06 06:51:47.061689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:34.443 [2024-12-06 06:51:47.061697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.061707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.061714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.061726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.061734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.061743] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.061750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.061760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.061768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.061777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.061785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.061794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.061801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.061810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.061817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.061826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.061833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.061860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.061867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.061878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.061885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.061894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.061902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.061911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.061919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.061928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.061935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.061945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.061953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.061962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 
[2024-12-06 06:51:47.061969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.061979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.061986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.061995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.062007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.062018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.062026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.062034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.062042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.062051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.062058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.062067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.062074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.062084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.062092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.062101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.062108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.062117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.062125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.062134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.062141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.062152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.062159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.062168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.062175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:25:34.444 [2024-12-06 06:51:47.062184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.062192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.062200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.062208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.062218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.062225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.062234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.062242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.062250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.062258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.062267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.062276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.062287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.062294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.062305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.062313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.062321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.062329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.062337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.062345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.062354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.062361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.062370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.062377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.062387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:25:34.444 [2024-12-06 06:51:47.062394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:34.445 [2024-12-06 06:51:47.062403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:34.445 [2024-12-06 06:51:47.062410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:34.445 [2024-12-06 06:51:47.062420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:34.445 [2024-12-06 06:51:47.062427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:34.445 [2024-12-06 06:51:47.062436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:34.445 [2024-12-06 06:51:47.062444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:34.445 [2024-12-06 06:51:47.062453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:34.445 [2024-12-06 06:51:47.062460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:34.445 [2024-12-06 06:51:47.062480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:34.445 [2024-12-06 06:51:47.062487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:34.445 [2024-12-06 06:51:47.062497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:34.445 [2024-12-06 06:51:47.062504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:34.445 [2024-12-06 06:51:47.062513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:34.445 [2024-12-06 06:51:47.062520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:34.445 [2024-12-06 06:51:47.062529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:34.445 [2024-12-06 06:51:47.062537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:34.445 [2024-12-06 06:51:47.062548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:34.445 [2024-12-06 06:51:47.062567] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:34.445 [2024-12-06 06:51:47.062579] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5ee066fa-e7bc-4a33-b1a8-f35f9ed69a0f 00:25:34.445 [2024-12-06 06:51:47.062588] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:34.445 [2024-12-06 06:51:47.062596] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:34.445 [2024-12-06 06:51:47.062605] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:34.445 [2024-12-06 06:51:47.062614] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:34.445 [2024-12-06 06:51:47.062622] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:34.445 [2024-12-06 06:51:47.062631] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:25:34.445 [2024-12-06 06:51:47.062638] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:34.445 [2024-12-06 06:51:47.062646] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:34.445 [2024-12-06 06:51:47.062653] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:34.445 [2024-12-06 06:51:47.062661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.445 [2024-12-06 06:51:47.062669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:34.445 [2024-12-06 06:51:47.062679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.001 ms 00:25:34.445 [2024-12-06 06:51:47.062686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.445 [2024-12-06 06:51:47.075000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.445 [2024-12-06 06:51:47.075041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:34.445 [2024-12-06 06:51:47.075057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.280 ms 00:25:34.445 [2024-12-06 06:51:47.075065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.445 [2024-12-06 06:51:47.075499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.445 [2024-12-06 06:51:47.075525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:34.445 [2024-12-06 06:51:47.075536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.347 ms 00:25:34.445 [2024-12-06 06:51:47.075543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.445 [2024-12-06 06:51:47.119174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.445 [2024-12-06 06:51:47.119223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:34.445 [2024-12-06 06:51:47.119236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.445 [2024-12-06 06:51:47.119244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.445 [2024-12-06 06:51:47.119356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.445 [2024-12-06 06:51:47.119365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:34.445 [2024-12-06 06:51:47.119374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.445 [2024-12-06 06:51:47.119381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.445 [2024-12-06 06:51:47.119450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.445 [2024-12-06 06:51:47.119473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:34.445 [2024-12-06 06:51:47.119485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.445 [2024-12-06 06:51:47.119493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.445 [2024-12-06 06:51:47.119524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.445 [2024-12-06 06:51:47.119538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:34.445 [2024-12-06 06:51:47.119548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.445 [2024-12-06 06:51:47.119555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.706 [2024-12-06 06:51:47.201635] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.706 [2024-12-06 06:51:47.201687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:34.706 [2024-12-06 06:51:47.201701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.706 [2024-12-06 06:51:47.201709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.706 [2024-12-06 06:51:47.265392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.706 [2024-12-06 06:51:47.265444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:34.706 [2024-12-06 06:51:47.265458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.706 [2024-12-06 06:51:47.265498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.706 [2024-12-06 06:51:47.265593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.706 [2024-12-06 06:51:47.265603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:34.706 [2024-12-06 06:51:47.265619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.706 [2024-12-06 06:51:47.265626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.706 [2024-12-06 06:51:47.265670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.706 [2024-12-06 06:51:47.265677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:34.706 [2024-12-06 06:51:47.265687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.706 [2024-12-06 06:51:47.265694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.706 [2024-12-06 06:51:47.265802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.706 [2024-12-06 06:51:47.265811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:34.706 [2024-12-06 06:51:47.265821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.706 [2024-12-06 06:51:47.265830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.706 [2024-12-06 06:51:47.265878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.706 [2024-12-06 06:51:47.265886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:34.706 [2024-12-06 06:51:47.265896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.706 [2024-12-06 06:51:47.265903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.706 [2024-12-06 06:51:47.265951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.706 [2024-12-06 06:51:47.265960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:34.706 [2024-12-06 06:51:47.265970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.706 [2024-12-06 06:51:47.265980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.706 [2024-12-06 06:51:47.266033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.706 [2024-12-06 06:51:47.266043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:34.706 [2024-12-06 06:51:47.266052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.706 [2024-12-06 06:51:47.266059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:25:34.706 [2024-12-06 06:51:47.266238] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 346.564 ms, result 0 00:25:34.706 true 00:25:34.706 06:51:47 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 76669 00:25:34.706 06:51:47 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76669 ']' 00:25:34.706 06:51:47 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76669 00:25:34.706 06:51:47 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:25:34.706 06:51:47 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:34.706 06:51:47 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76669 00:25:34.706 06:51:47 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:34.706 06:51:47 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:34.706 killing process with pid 76669 00:25:34.706 06:51:47 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76669' 00:25:34.706 06:51:47 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76669 00:25:34.706 06:51:47 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76669 00:25:42.813 06:51:54 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:25:42.813 65536+0 records in 00:25:42.813 65536+0 records out 00:25:42.813 268435456 bytes (268 MB, 256 MiB) copied, 1.06886 s, 251 MB/s 00:25:42.813 06:51:55 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:42.813 [2024-12-06 06:51:55.472576] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
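(Quick sanity check on the dd transfer just logged — a bash one-liner of ours, not part of trim.sh, confirming the byte count for count=65536 with bs=4K.)

  # 65536 records * 4096 B = 268435456 B = 256 MiB (268 MB decimal),
  # matching "268435456 bytes (268 MB, 256 MiB) copied" above;
  # over the reported 1.06886 s that is ~251 MB/s, as dd printed.
  echo "$(( 65536 * 4096 )) bytes = $(( 65536 * 4096 / 1024 / 1024 )) MiB"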
00:25:42.813 [2024-12-06 06:51:55.472712] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76853 ] 00:25:43.070 [2024-12-06 06:51:55.631621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:43.070 [2024-12-06 06:51:55.731511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.328 [2024-12-06 06:51:55.988159] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:43.328 [2024-12-06 06:51:55.988228] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:43.588 [2024-12-06 06:51:56.142620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.588 [2024-12-06 06:51:56.142679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:43.588 [2024-12-06 06:51:56.142693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:43.588 [2024-12-06 06:51:56.142701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.588 [2024-12-06 06:51:56.145378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.589 [2024-12-06 06:51:56.145416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:43.589 [2024-12-06 06:51:56.145426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.659 ms 00:25:43.589 [2024-12-06 06:51:56.145433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.589 [2024-12-06 06:51:56.145579] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:43.589 [2024-12-06 06:51:56.146233] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:43.589 [2024-12-06 06:51:56.146259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.589 [2024-12-06 06:51:56.146268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:43.589 [2024-12-06 06:51:56.146277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.688 ms 00:25:43.589 [2024-12-06 06:51:56.146284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.589 [2024-12-06 06:51:56.147385] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:43.589 [2024-12-06 06:51:56.159537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.589 [2024-12-06 06:51:56.159570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:43.589 [2024-12-06 06:51:56.159582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.154 ms 00:25:43.589 [2024-12-06 06:51:56.159590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.589 [2024-12-06 06:51:56.159686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.589 [2024-12-06 06:51:56.159697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:43.589 [2024-12-06 06:51:56.159706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:25:43.589 [2024-12-06 06:51:56.159713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.589 [2024-12-06 06:51:56.164530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:43.589 [2024-12-06 06:51:56.164559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:43.589 [2024-12-06 06:51:56.164568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.776 ms 00:25:43.589 [2024-12-06 06:51:56.164575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.589 [2024-12-06 06:51:56.164660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.589 [2024-12-06 06:51:56.164670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:43.589 [2024-12-06 06:51:56.164678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:25:43.589 [2024-12-06 06:51:56.164686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.589 [2024-12-06 06:51:56.164713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.589 [2024-12-06 06:51:56.164721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:43.589 [2024-12-06 06:51:56.164733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:43.589 [2024-12-06 06:51:56.164740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.589 [2024-12-06 06:51:56.164760] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:43.589 [2024-12-06 06:51:56.167913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.589 [2024-12-06 06:51:56.167941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:43.589 [2024-12-06 06:51:56.167950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.157 ms 00:25:43.589 [2024-12-06 06:51:56.167957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.589 [2024-12-06 06:51:56.167993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.589 [2024-12-06 06:51:56.168002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:43.589 [2024-12-06 06:51:56.168010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:43.589 [2024-12-06 06:51:56.168017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.589 [2024-12-06 06:51:56.168036] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:43.589 [2024-12-06 06:51:56.168054] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:43.589 [2024-12-06 06:51:56.168088] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:43.589 [2024-12-06 06:51:56.168103] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:43.589 [2024-12-06 06:51:56.168204] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:43.589 [2024-12-06 06:51:56.168215] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:43.589 [2024-12-06 06:51:56.168225] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:43.589 [2024-12-06 06:51:56.168237] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:43.589 [2024-12-06 06:51:56.168246] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:43.589 [2024-12-06 06:51:56.168253] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:43.589 [2024-12-06 06:51:56.168261] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:43.589 [2024-12-06 06:51:56.168268] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:43.589 [2024-12-06 06:51:56.168275] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:43.589 [2024-12-06 06:51:56.168282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.589 [2024-12-06 06:51:56.168289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:43.589 [2024-12-06 06:51:56.168297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:25:43.589 [2024-12-06 06:51:56.168304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.589 [2024-12-06 06:51:56.168391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.589 [2024-12-06 06:51:56.168402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:43.589 [2024-12-06 06:51:56.168410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:43.589 [2024-12-06 06:51:56.168416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.589 [2024-12-06 06:51:56.168532] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:43.589 [2024-12-06 06:51:56.168543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:43.589 [2024-12-06 06:51:56.168551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:43.589 [2024-12-06 06:51:56.168559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:43.589 [2024-12-06 06:51:56.168566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:43.589 [2024-12-06 06:51:56.168573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:43.589 [2024-12-06 06:51:56.168579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:43.589 [2024-12-06 06:51:56.168587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:43.589 [2024-12-06 06:51:56.168599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:43.589 [2024-12-06 06:51:56.168605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:43.589 [2024-12-06 06:51:56.168612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:43.589 [2024-12-06 06:51:56.168624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:43.589 [2024-12-06 06:51:56.168631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:43.589 [2024-12-06 06:51:56.168637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:43.589 [2024-12-06 06:51:56.168644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:43.589 [2024-12-06 06:51:56.168650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:43.589 [2024-12-06 06:51:56.168656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:43.589 [2024-12-06 06:51:56.168662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:43.589 [2024-12-06 06:51:56.168669] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:43.589 [2024-12-06 06:51:56.168676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:43.589 [2024-12-06 06:51:56.168685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:43.589 [2024-12-06 06:51:56.168691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:43.589 [2024-12-06 06:51:56.168697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:43.589 [2024-12-06 06:51:56.168704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:43.589 [2024-12-06 06:51:56.168710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:43.589 [2024-12-06 06:51:56.168716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:43.589 [2024-12-06 06:51:56.168723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:43.589 [2024-12-06 06:51:56.168730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:43.589 [2024-12-06 06:51:56.168736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:43.589 [2024-12-06 06:51:56.168743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:43.589 [2024-12-06 06:51:56.168749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:43.589 [2024-12-06 06:51:56.168755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:43.589 [2024-12-06 06:51:56.168761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:43.589 [2024-12-06 06:51:56.168768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:43.589 [2024-12-06 06:51:56.168775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:43.589 [2024-12-06 06:51:56.168781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:43.589 [2024-12-06 06:51:56.168787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:43.589 [2024-12-06 06:51:56.168794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:43.589 [2024-12-06 06:51:56.168800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:43.589 [2024-12-06 06:51:56.168807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:43.589 [2024-12-06 06:51:56.168813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:43.589 [2024-12-06 06:51:56.168819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:43.590 [2024-12-06 06:51:56.168825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:43.590 [2024-12-06 06:51:56.168831] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:43.590 [2024-12-06 06:51:56.168839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:43.590 [2024-12-06 06:51:56.168847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:43.590 [2024-12-06 06:51:56.168854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:43.590 [2024-12-06 06:51:56.168862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:43.590 [2024-12-06 06:51:56.168868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:43.590 [2024-12-06 06:51:56.168875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:43.590 
[2024-12-06 06:51:56.168881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:43.590 [2024-12-06 06:51:56.168887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:43.590 [2024-12-06 06:51:56.168894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:43.590 [2024-12-06 06:51:56.168903] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:43.590 [2024-12-06 06:51:56.168916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:43.590 [2024-12-06 06:51:56.168925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:43.590 [2024-12-06 06:51:56.168932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:43.590 [2024-12-06 06:51:56.168939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:43.590 [2024-12-06 06:51:56.168946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:43.590 [2024-12-06 06:51:56.168953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:43.590 [2024-12-06 06:51:56.168960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:43.590 [2024-12-06 06:51:56.168966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:43.590 [2024-12-06 06:51:56.168973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:43.590 [2024-12-06 06:51:56.168980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:43.590 [2024-12-06 06:51:56.168987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:43.590 [2024-12-06 06:51:56.168994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:43.590 [2024-12-06 06:51:56.169001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:43.590 [2024-12-06 06:51:56.169008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:43.590 [2024-12-06 06:51:56.169015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:43.590 [2024-12-06 06:51:56.169022] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:43.590 [2024-12-06 06:51:56.169030] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:43.590 [2024-12-06 06:51:56.169038] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:43.590 [2024-12-06 06:51:56.169044] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:43.590 [2024-12-06 06:51:56.169051] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:43.590 [2024-12-06 06:51:56.169059] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:43.590 [2024-12-06 06:51:56.169066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.590 [2024-12-06 06:51:56.169081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:43.590 [2024-12-06 06:51:56.169089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.619 ms 00:25:43.590 [2024-12-06 06:51:56.169096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.590 [2024-12-06 06:51:56.194645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.590 [2024-12-06 06:51:56.194684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:43.590 [2024-12-06 06:51:56.194695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.482 ms 00:25:43.590 [2024-12-06 06:51:56.194703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.590 [2024-12-06 06:51:56.194829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.590 [2024-12-06 06:51:56.194839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:43.590 [2024-12-06 06:51:56.194847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:25:43.590 [2024-12-06 06:51:56.194855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.590 [2024-12-06 06:51:56.234379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.590 [2024-12-06 06:51:56.234419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:43.590 [2024-12-06 06:51:56.234434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.502 ms 00:25:43.590 [2024-12-06 06:51:56.234442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.590 [2024-12-06 06:51:56.234549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.590 [2024-12-06 06:51:56.234561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:43.590 [2024-12-06 06:51:56.234571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:43.590 [2024-12-06 06:51:56.234578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.590 [2024-12-06 06:51:56.234901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.590 [2024-12-06 06:51:56.234921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:43.590 [2024-12-06 06:51:56.234936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:25:43.590 [2024-12-06 06:51:56.234944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.590 [2024-12-06 06:51:56.235068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.590 [2024-12-06 06:51:56.235085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:43.590 [2024-12-06 06:51:56.235094] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:25:43.590 [2024-12-06 06:51:56.235100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.590 [2024-12-06 06:51:56.248263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.590 [2024-12-06 06:51:56.248295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:43.590 [2024-12-06 06:51:56.248305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.142 ms 00:25:43.590 [2024-12-06 06:51:56.248312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.590 [2024-12-06 06:51:56.260661] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:25:43.590 [2024-12-06 06:51:56.260698] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:43.590 [2024-12-06 06:51:56.260710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.590 [2024-12-06 06:51:56.260717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:43.590 [2024-12-06 06:51:56.260726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.305 ms 00:25:43.590 [2024-12-06 06:51:56.260733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.590 [2024-12-06 06:51:56.284767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.590 [2024-12-06 06:51:56.284817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:43.590 [2024-12-06 06:51:56.284829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.958 ms 00:25:43.590 [2024-12-06 06:51:56.284837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.590 [2024-12-06 06:51:56.296478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.590 [2024-12-06 06:51:56.296510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:43.590 [2024-12-06 06:51:56.296520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.569 ms 00:25:43.590 [2024-12-06 06:51:56.296527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.590 [2024-12-06 06:51:56.307711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.590 [2024-12-06 06:51:56.307842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:43.590 [2024-12-06 06:51:56.307859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.122 ms 00:25:43.590 [2024-12-06 06:51:56.307868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.590 [2024-12-06 06:51:56.308490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.590 [2024-12-06 06:51:56.308510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:43.590 [2024-12-06 06:51:56.308519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:25:43.590 [2024-12-06 06:51:56.308526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.849 [2024-12-06 06:51:56.364425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.849 [2024-12-06 06:51:56.364507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:43.849 [2024-12-06 06:51:56.364522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 55.873 ms 00:25:43.849 [2024-12-06 06:51:56.364530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.849 [2024-12-06 06:51:56.375763] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:43.849 [2024-12-06 06:51:56.390915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.849 [2024-12-06 06:51:56.391096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:43.849 [2024-12-06 06:51:56.391115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.240 ms 00:25:43.849 [2024-12-06 06:51:56.391124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.849 [2024-12-06 06:51:56.391234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.849 [2024-12-06 06:51:56.391245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:43.849 [2024-12-06 06:51:56.391254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:43.849 [2024-12-06 06:51:56.391261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.849 [2024-12-06 06:51:56.391312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.849 [2024-12-06 06:51:56.391321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:43.849 [2024-12-06 06:51:56.391328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:25:43.849 [2024-12-06 06:51:56.391336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.849 [2024-12-06 06:51:56.391371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.849 [2024-12-06 06:51:56.391387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:43.849 [2024-12-06 06:51:56.391410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:43.849 [2024-12-06 06:51:56.391418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.849 [2024-12-06 06:51:56.391451] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:43.849 [2024-12-06 06:51:56.391486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.849 [2024-12-06 06:51:56.391494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:43.849 [2024-12-06 06:51:56.391502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:25:43.849 [2024-12-06 06:51:56.391509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.849 [2024-12-06 06:51:56.416216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.849 [2024-12-06 06:51:56.416260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:43.849 [2024-12-06 06:51:56.416273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.685 ms 00:25:43.849 [2024-12-06 06:51:56.416281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.849 [2024-12-06 06:51:56.416375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.849 [2024-12-06 06:51:56.416386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:43.849 [2024-12-06 06:51:56.416395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:25:43.849 [2024-12-06 06:51:56.416402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:25:43.849 [2024-12-06 06:51:56.417317] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:43.849 [2024-12-06 06:51:56.420376] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 274.419 ms, result 0 00:25:43.849 [2024-12-06 06:51:56.421078] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:43.849 [2024-12-06 06:51:56.434168] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:44.783  [2024-12-06T06:51:58.456Z] Copying: 40/256 [MB] (40 MBps) [2024-12-06T06:51:59.830Z] Copying: 76/256 [MB] (36 MBps) [2024-12-06T06:52:00.763Z] Copying: 117/256 [MB] (41 MBps) [2024-12-06T06:52:01.697Z] Copying: 156/256 [MB] (39 MBps) [2024-12-06T06:52:02.637Z] Copying: 197/256 [MB] (40 MBps) [2024-12-06T06:52:03.202Z] Copying: 232/256 [MB] (35 MBps) [2024-12-06T06:52:03.202Z] Copying: 256/256 [MB] (average 38 MBps)[2024-12-06 06:52:03.048419] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:50.461 [2024-12-06 06:52:03.057654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.461 [2024-12-06 06:52:03.057807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:50.461 [2024-12-06 06:52:03.057826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:50.461 [2024-12-06 06:52:03.057841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.461 [2024-12-06 06:52:03.057867] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:50.461 [2024-12-06 06:52:03.060507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.461 [2024-12-06 06:52:03.060535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:50.461 [2024-12-06 06:52:03.060546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.626 ms 00:25:50.461 [2024-12-06 06:52:03.060553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.461 [2024-12-06 06:52:03.062442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.461 [2024-12-06 06:52:03.062494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:50.461 [2024-12-06 06:52:03.062509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.866 ms 00:25:50.461 [2024-12-06 06:52:03.062520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.461 [2024-12-06 06:52:03.069586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.461 [2024-12-06 06:52:03.069621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:50.461 [2024-12-06 06:52:03.069631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.045 ms 00:25:50.461 [2024-12-06 06:52:03.069638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.462 [2024-12-06 06:52:03.076605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.462 [2024-12-06 06:52:03.076633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:50.462 [2024-12-06 06:52:03.076643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.905 ms 00:25:50.462 [2024-12-06 06:52:03.076652] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.462 [2024-12-06 06:52:03.100003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.462 [2024-12-06 06:52:03.100132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:50.462 [2024-12-06 06:52:03.100148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.309 ms 00:25:50.462 [2024-12-06 06:52:03.100155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.462 [2024-12-06 06:52:03.114631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.462 [2024-12-06 06:52:03.114758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:50.462 [2024-12-06 06:52:03.114778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.445 ms 00:25:50.462 [2024-12-06 06:52:03.114786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.462 [2024-12-06 06:52:03.114917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.462 [2024-12-06 06:52:03.114928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:50.462 [2024-12-06 06:52:03.114937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:25:50.462 [2024-12-06 06:52:03.114951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.462 [2024-12-06 06:52:03.138942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.462 [2024-12-06 06:52:03.138975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:50.462 [2024-12-06 06:52:03.138986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.975 ms 00:25:50.462 [2024-12-06 06:52:03.138994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.462 [2024-12-06 06:52:03.162349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.462 [2024-12-06 06:52:03.162516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:50.462 [2024-12-06 06:52:03.162536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.321 ms 00:25:50.462 [2024-12-06 06:52:03.162544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.462 [2024-12-06 06:52:03.185291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.462 [2024-12-06 06:52:03.185324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:50.462 [2024-12-06 06:52:03.185336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.713 ms 00:25:50.462 [2024-12-06 06:52:03.185344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.721 [2024-12-06 06:52:03.207799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.721 [2024-12-06 06:52:03.207829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:50.721 [2024-12-06 06:52:03.207839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.396 ms 00:25:50.721 [2024-12-06 06:52:03.207847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.721 [2024-12-06 06:52:03.207879] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:50.721 [2024-12-06 06:52:03.207893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.207904] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.207912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.207919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.207927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.207934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.207942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.207949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.207957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.207965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.207973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.207980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.207987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.207994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.208002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.208009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.208017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.208024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.208031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.208038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.208046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.208053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.208060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.208067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.208074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.208082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.208090] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.208097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.208105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.208112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.208120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.208127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.208134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.208141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.208148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.208156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.208163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.208170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.208177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.208184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.208192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.208199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.208207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.208214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.208221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.208228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.208235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.208242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.208250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.208257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.208264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 
[2024-12-06 06:52:03.208271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.208278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.208285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.208293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.208301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:50.721 [2024-12-06 06:52:03.208309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:50.722 [2024-12-06 06:52:03.208316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:50.722 [2024-12-06 06:52:03.208323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:50.722 [2024-12-06 06:52:03.208331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:50.722 [2024-12-06 06:52:03.208338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:50.722 [2024-12-06 06:52:03.208347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:50.722 [2024-12-06 06:52:03.208355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:50.722 [2024-12-06 06:52:03.208362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:50.722 [2024-12-06 06:52:03.208369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:50.722 [2024-12-06 06:52:03.208376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:50.722 [2024-12-06 06:52:03.208384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:50.722 [2024-12-06 06:52:03.208391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:50.722 [2024-12-06 06:52:03.208398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:50.722 [2024-12-06 06:52:03.208405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:50.722 [2024-12-06 06:52:03.208413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:50.722 [2024-12-06 06:52:03.208421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:50.722 [2024-12-06 06:52:03.208428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:50.722 [2024-12-06 06:52:03.208435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:50.722 [2024-12-06 06:52:03.208443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:50.722 [2024-12-06 06:52:03.208450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 
state: free 00:25:50.722 [2024-12-06 06:52:03.208456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:50.722 [2024-12-06 06:52:03.208482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:50.722 [2024-12-06 06:52:03.208494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:50.722 [2024-12-06 06:52:03.208505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:50.722 [2024-12-06 06:52:03.208517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:50.722 [2024-12-06 06:52:03.208530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:50.722 [2024-12-06 06:52:03.208539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:50.722 [2024-12-06 06:52:03.208546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:50.722 [2024-12-06 06:52:03.208554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:50.722 [2024-12-06 06:52:03.208561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:50.722 [2024-12-06 06:52:03.208569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:50.722 [2024-12-06 06:52:03.208576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:50.722 [2024-12-06 06:52:03.208584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:50.722 [2024-12-06 06:52:03.208592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:50.722 [2024-12-06 06:52:03.208599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:50.722 [2024-12-06 06:52:03.208606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:50.722 [2024-12-06 06:52:03.208613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:50.722 [2024-12-06 06:52:03.208625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:50.722 [2024-12-06 06:52:03.208641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:50.722 [2024-12-06 06:52:03.208648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:50.722 [2024-12-06 06:52:03.208656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:50.722 [2024-12-06 06:52:03.208663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:50.722 [2024-12-06 06:52:03.208671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:50.722 [2024-12-06 06:52:03.208679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:50.722 [2024-12-06 06:52:03.208695] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:25:50.722 [2024-12-06 06:52:03.208702] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5ee066fa-e7bc-4a33-b1a8-f35f9ed69a0f 00:25:50.722 [2024-12-06 06:52:03.208711] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:50.722 [2024-12-06 06:52:03.208718] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:50.722 [2024-12-06 06:52:03.208725] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:50.722 [2024-12-06 06:52:03.208733] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:50.722 [2024-12-06 06:52:03.208740] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:50.722 [2024-12-06 06:52:03.208747] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:50.722 [2024-12-06 06:52:03.208754] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:50.722 [2024-12-06 06:52:03.208761] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:50.722 [2024-12-06 06:52:03.208767] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:50.722 [2024-12-06 06:52:03.208774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.722 [2024-12-06 06:52:03.208784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:50.722 [2024-12-06 06:52:03.208792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.896 ms 00:25:50.722 [2024-12-06 06:52:03.208799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.722 [2024-12-06 06:52:03.221003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.722 [2024-12-06 06:52:03.221031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:50.722 [2024-12-06 06:52:03.221040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.186 ms 00:25:50.722 [2024-12-06 06:52:03.221048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.722 [2024-12-06 06:52:03.221399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.722 [2024-12-06 06:52:03.221408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:50.722 [2024-12-06 06:52:03.221416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:25:50.722 [2024-12-06 06:52:03.221423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.722 [2024-12-06 06:52:03.256490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:50.722 [2024-12-06 06:52:03.256532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:50.722 [2024-12-06 06:52:03.256543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:50.722 [2024-12-06 06:52:03.256551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.722 [2024-12-06 06:52:03.256635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:50.722 [2024-12-06 06:52:03.256643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:50.722 [2024-12-06 06:52:03.256651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:50.722 [2024-12-06 06:52:03.256659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.722 [2024-12-06 06:52:03.256703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:50.722 [2024-12-06 
06:52:03.256712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:50.722 [2024-12-06 06:52:03.256719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:50.723 [2024-12-06 06:52:03.256726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.723 [2024-12-06 06:52:03.256743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:50.723 [2024-12-06 06:52:03.256754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:50.723 [2024-12-06 06:52:03.256761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:50.723 [2024-12-06 06:52:03.256768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.723 [2024-12-06 06:52:03.333421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:50.723 [2024-12-06 06:52:03.333499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:50.723 [2024-12-06 06:52:03.333517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:50.723 [2024-12-06 06:52:03.333530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.723 [2024-12-06 06:52:03.396056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:50.723 [2024-12-06 06:52:03.396103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:50.723 [2024-12-06 06:52:03.396113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:50.723 [2024-12-06 06:52:03.396121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.723 [2024-12-06 06:52:03.396174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:50.723 [2024-12-06 06:52:03.396183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:50.723 [2024-12-06 06:52:03.396191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:50.723 [2024-12-06 06:52:03.396198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.723 [2024-12-06 06:52:03.396226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:50.723 [2024-12-06 06:52:03.396234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:50.723 [2024-12-06 06:52:03.396245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:50.723 [2024-12-06 06:52:03.396252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.723 [2024-12-06 06:52:03.396338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:50.723 [2024-12-06 06:52:03.396347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:50.723 [2024-12-06 06:52:03.396356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:50.723 [2024-12-06 06:52:03.396363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.723 [2024-12-06 06:52:03.396392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:50.723 [2024-12-06 06:52:03.396401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:50.723 [2024-12-06 06:52:03.396408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:50.723 [2024-12-06 06:52:03.396418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.723 [2024-12-06 06:52:03.396451] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:50.723 [2024-12-06 06:52:03.396460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:50.723 [2024-12-06 06:52:03.396498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:50.723 [2024-12-06 06:52:03.396509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.723 [2024-12-06 06:52:03.396558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:50.723 [2024-12-06 06:52:03.396569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:50.723 [2024-12-06 06:52:03.396580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:50.723 [2024-12-06 06:52:03.396587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.723 [2024-12-06 06:52:03.396717] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 339.057 ms, result 0 00:25:51.659 00:25:51.659 00:25:51.660 06:52:04 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=76950 00:25:51.660 06:52:04 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 76950 00:25:51.660 06:52:04 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:25:51.660 06:52:04 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76950 ']' 00:25:51.660 06:52:04 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:51.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:51.660 06:52:04 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:51.660 06:52:04 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:51.660 06:52:04 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:51.660 06:52:04 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:25:51.918 [2024-12-06 06:52:04.427655] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
00:25:51.918 [2024-12-06 06:52:04.428242] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76950 ] 00:25:51.918 [2024-12-06 06:52:04.584899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:52.176 [2024-12-06 06:52:04.684373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:52.745 06:52:05 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:52.746 06:52:05 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:25:52.746 06:52:05 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:25:52.746 [2024-12-06 06:52:05.449753] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:52.746 [2024-12-06 06:52:05.449977] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:53.006 [2024-12-06 06:52:05.622671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.006 [2024-12-06 06:52:05.622726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:53.006 [2024-12-06 06:52:05.622743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:53.006 [2024-12-06 06:52:05.622753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.006 [2024-12-06 06:52:05.625435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.006 [2024-12-06 06:52:05.625591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:53.006 [2024-12-06 06:52:05.625612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.662 ms 00:25:53.006 [2024-12-06 06:52:05.625621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.006 [2024-12-06 06:52:05.625692] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:53.006 [2024-12-06 06:52:05.626419] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:53.006 [2024-12-06 06:52:05.626455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.006 [2024-12-06 06:52:05.626476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:53.006 [2024-12-06 06:52:05.626488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.773 ms 00:25:53.006 [2024-12-06 06:52:05.626497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.006 [2024-12-06 06:52:05.627723] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:53.006 [2024-12-06 06:52:05.640194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.006 [2024-12-06 06:52:05.640230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:53.006 [2024-12-06 06:52:05.640243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.475 ms 00:25:53.006 [2024-12-06 06:52:05.640253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.006 [2024-12-06 06:52:05.640336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.006 [2024-12-06 06:52:05.640348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:53.006 [2024-12-06 06:52:05.640357] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:25:53.006 [2024-12-06 06:52:05.640365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.006 [2024-12-06 06:52:05.645399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.006 [2024-12-06 06:52:05.645434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:53.006 [2024-12-06 06:52:05.645444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.986 ms 00:25:53.006 [2024-12-06 06:52:05.645454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.006 [2024-12-06 06:52:05.645568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.006 [2024-12-06 06:52:05.645581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:53.006 [2024-12-06 06:52:05.645589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:25:53.006 [2024-12-06 06:52:05.645601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.006 [2024-12-06 06:52:05.645627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.006 [2024-12-06 06:52:05.645637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:53.006 [2024-12-06 06:52:05.645644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:53.006 [2024-12-06 06:52:05.645653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.006 [2024-12-06 06:52:05.645676] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:53.006 [2024-12-06 06:52:05.648887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.006 [2024-12-06 06:52:05.648912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:53.006 [2024-12-06 06:52:05.648924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.215 ms 00:25:53.006 [2024-12-06 06:52:05.648933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.006 [2024-12-06 06:52:05.648971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.006 [2024-12-06 06:52:05.648980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:53.006 [2024-12-06 06:52:05.648991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:53.006 [2024-12-06 06:52:05.649001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.006 [2024-12-06 06:52:05.649023] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:53.006 [2024-12-06 06:52:05.649041] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:53.006 [2024-12-06 06:52:05.649087] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:53.006 [2024-12-06 06:52:05.649103] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:53.006 [2024-12-06 06:52:05.649208] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:53.006 [2024-12-06 06:52:05.649220] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:53.006 [2024-12-06 06:52:05.649235] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:53.006 [2024-12-06 06:52:05.649246] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:53.006 [2024-12-06 06:52:05.649258] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:53.006 [2024-12-06 06:52:05.649267] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:53.006 [2024-12-06 06:52:05.649276] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:53.006 [2024-12-06 06:52:05.649284] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:53.006 [2024-12-06 06:52:05.649296] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:53.006 [2024-12-06 06:52:05.649304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.006 [2024-12-06 06:52:05.649314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:53.006 [2024-12-06 06:52:05.649323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:25:53.006 [2024-12-06 06:52:05.649332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.006 [2024-12-06 06:52:05.649421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.006 [2024-12-06 06:52:05.649432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:53.006 [2024-12-06 06:52:05.649441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:25:53.006 [2024-12-06 06:52:05.649450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.006 [2024-12-06 06:52:05.649580] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:53.006 [2024-12-06 06:52:05.649594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:53.006 [2024-12-06 06:52:05.649604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:53.006 [2024-12-06 06:52:05.649614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:53.006 [2024-12-06 06:52:05.649623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:53.006 [2024-12-06 06:52:05.649634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:53.006 [2024-12-06 06:52:05.649642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:53.006 [2024-12-06 06:52:05.649654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:53.006 [2024-12-06 06:52:05.649662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:53.006 [2024-12-06 06:52:05.649672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:53.006 [2024-12-06 06:52:05.649680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:53.006 [2024-12-06 06:52:05.649689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:53.006 [2024-12-06 06:52:05.649697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:53.006 [2024-12-06 06:52:05.649706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:53.006 [2024-12-06 06:52:05.649714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:53.006 [2024-12-06 06:52:05.649723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:53.006 
[2024-12-06 06:52:05.649731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:53.006 [2024-12-06 06:52:05.649742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:53.006 [2024-12-06 06:52:05.649755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:53.006 [2024-12-06 06:52:05.649765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:53.006 [2024-12-06 06:52:05.649773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:53.006 [2024-12-06 06:52:05.649782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:53.006 [2024-12-06 06:52:05.649789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:53.006 [2024-12-06 06:52:05.649800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:53.006 [2024-12-06 06:52:05.649808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:53.006 [2024-12-06 06:52:05.649820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:53.006 [2024-12-06 06:52:05.649828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:53.006 [2024-12-06 06:52:05.649837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:53.006 [2024-12-06 06:52:05.649844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:53.006 [2024-12-06 06:52:05.649855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:53.006 [2024-12-06 06:52:05.649863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:53.006 [2024-12-06 06:52:05.649872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:53.006 [2024-12-06 06:52:05.649879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:53.006 [2024-12-06 06:52:05.649888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:53.006 [2024-12-06 06:52:05.649896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:53.006 [2024-12-06 06:52:05.649905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:53.006 [2024-12-06 06:52:05.649912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:53.006 [2024-12-06 06:52:05.649921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:53.006 [2024-12-06 06:52:05.649929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:53.006 [2024-12-06 06:52:05.649939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:53.006 [2024-12-06 06:52:05.649947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:53.006 [2024-12-06 06:52:05.649956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:53.006 [2024-12-06 06:52:05.649964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:53.006 [2024-12-06 06:52:05.649973] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:53.006 [2024-12-06 06:52:05.649983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:53.006 [2024-12-06 06:52:05.649993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:53.006 [2024-12-06 06:52:05.650002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:53.006 [2024-12-06 06:52:05.650011] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:25:53.006 [2024-12-06 06:52:05.650019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:53.006 [2024-12-06 06:52:05.650029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:53.006 [2024-12-06 06:52:05.650037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:53.006 [2024-12-06 06:52:05.650046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:53.006 [2024-12-06 06:52:05.650054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:53.006 [2024-12-06 06:52:05.650065] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:53.006 [2024-12-06 06:52:05.650075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:53.006 [2024-12-06 06:52:05.650089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:53.006 [2024-12-06 06:52:05.650098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:53.006 [2024-12-06 06:52:05.650107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:53.006 [2024-12-06 06:52:05.650116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:53.006 [2024-12-06 06:52:05.650125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:53.006 [2024-12-06 06:52:05.650133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:53.006 [2024-12-06 06:52:05.650143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:53.007 [2024-12-06 06:52:05.650151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:53.007 [2024-12-06 06:52:05.650161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:53.007 [2024-12-06 06:52:05.650169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:53.007 [2024-12-06 06:52:05.650178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:53.007 [2024-12-06 06:52:05.650185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:53.007 [2024-12-06 06:52:05.650193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:53.007 [2024-12-06 06:52:05.650200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:53.007 [2024-12-06 06:52:05.650208] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:53.007 [2024-12-06 
06:52:05.650216] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:53.007 [2024-12-06 06:52:05.650228] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:53.007 [2024-12-06 06:52:05.650235] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:53.007 [2024-12-06 06:52:05.650243] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:53.007 [2024-12-06 06:52:05.650250] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:53.007 [2024-12-06 06:52:05.650259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.007 [2024-12-06 06:52:05.650266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:53.007 [2024-12-06 06:52:05.650275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.746 ms 00:25:53.007 [2024-12-06 06:52:05.650283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.007 [2024-12-06 06:52:05.676391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.007 [2024-12-06 06:52:05.676547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:53.007 [2024-12-06 06:52:05.676567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.037 ms 00:25:53.007 [2024-12-06 06:52:05.676578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.007 [2024-12-06 06:52:05.676701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.007 [2024-12-06 06:52:05.676710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:53.007 [2024-12-06 06:52:05.676720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:25:53.007 [2024-12-06 06:52:05.676728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.007 [2024-12-06 06:52:05.706915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.007 [2024-12-06 06:52:05.706950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:53.007 [2024-12-06 06:52:05.706962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.164 ms 00:25:53.007 [2024-12-06 06:52:05.706969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.007 [2024-12-06 06:52:05.707033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.007 [2024-12-06 06:52:05.707042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:53.007 [2024-12-06 06:52:05.707052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:53.007 [2024-12-06 06:52:05.707060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.007 [2024-12-06 06:52:05.707372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.007 [2024-12-06 06:52:05.707386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:53.007 [2024-12-06 06:52:05.707418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:25:53.007 [2024-12-06 06:52:05.707426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:25:53.007 [2024-12-06 06:52:05.707584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.007 [2024-12-06 06:52:05.707593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:53.007 [2024-12-06 06:52:05.707603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:25:53.007 [2024-12-06 06:52:05.707611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.007 [2024-12-06 06:52:05.721908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.007 [2024-12-06 06:52:05.721935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:53.007 [2024-12-06 06:52:05.721948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.275 ms 00:25:53.007 [2024-12-06 06:52:05.721955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.265 [2024-12-06 06:52:05.760144] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:25:53.265 [2024-12-06 06:52:05.760187] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:53.265 [2024-12-06 06:52:05.760204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.265 [2024-12-06 06:52:05.760212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:53.265 [2024-12-06 06:52:05.760224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.132 ms 00:25:53.266 [2024-12-06 06:52:05.760238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.266 [2024-12-06 06:52:05.784748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.266 [2024-12-06 06:52:05.784785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:53.266 [2024-12-06 06:52:05.784799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.430 ms 00:25:53.266 [2024-12-06 06:52:05.784807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.266 [2024-12-06 06:52:05.796519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.266 [2024-12-06 06:52:05.796549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:53.266 [2024-12-06 06:52:05.796563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.630 ms 00:25:53.266 [2024-12-06 06:52:05.796570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.266 [2024-12-06 06:52:05.807741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.266 [2024-12-06 06:52:05.807770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:53.266 [2024-12-06 06:52:05.807783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.103 ms 00:25:53.266 [2024-12-06 06:52:05.807792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.266 [2024-12-06 06:52:05.808411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.266 [2024-12-06 06:52:05.808429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:53.266 [2024-12-06 06:52:05.808440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms 00:25:53.266 [2024-12-06 06:52:05.808447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.266 [2024-12-06 
06:52:05.864498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.266 [2024-12-06 06:52:05.864550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:53.266 [2024-12-06 06:52:05.864565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.009 ms 00:25:53.266 [2024-12-06 06:52:05.864573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.266 [2024-12-06 06:52:05.875103] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:53.266 [2024-12-06 06:52:05.889394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.266 [2024-12-06 06:52:05.889437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:53.266 [2024-12-06 06:52:05.889451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.725 ms 00:25:53.266 [2024-12-06 06:52:05.889476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.266 [2024-12-06 06:52:05.889560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.266 [2024-12-06 06:52:05.889572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:53.266 [2024-12-06 06:52:05.889583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:53.266 [2024-12-06 06:52:05.889593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.266 [2024-12-06 06:52:05.889643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.266 [2024-12-06 06:52:05.889653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:53.266 [2024-12-06 06:52:05.889661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:25:53.266 [2024-12-06 06:52:05.889672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.266 [2024-12-06 06:52:05.889695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.266 [2024-12-06 06:52:05.889704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:53.266 [2024-12-06 06:52:05.889712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:53.266 [2024-12-06 06:52:05.889723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.266 [2024-12-06 06:52:05.889754] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:53.266 [2024-12-06 06:52:05.889767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.266 [2024-12-06 06:52:05.889778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:53.266 [2024-12-06 06:52:05.889787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:53.266 [2024-12-06 06:52:05.889794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.266 [2024-12-06 06:52:05.913162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.266 [2024-12-06 06:52:05.913201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:53.266 [2024-12-06 06:52:05.913215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.342 ms 00:25:53.266 [2024-12-06 06:52:05.913223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.266 [2024-12-06 06:52:05.913312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.266 [2024-12-06 06:52:05.913323] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:25:53.266 [2024-12-06 06:52:05.913333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms
00:25:53.266 [2024-12-06 06:52:05.913342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:53.266 [2024-12-06 06:52:05.914274] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:25:53.266 [2024-12-06 06:52:05.917252] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 291.339 ms, result 0
00:25:53.266 [2024-12-06 06:52:05.918060] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:25:53.266 Some configs were skipped because the RPC state that can call them passed over.
00:25:53.266 06:52:05 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:25:53.525 [2024-12-06 06:52:06.152144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:53.525 [2024-12-06 06:52:06.152310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:25:53.525 [2024-12-06 06:52:06.152330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.141 ms
00:25:53.525 [2024-12-06 06:52:06.152340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:53.525 [2024-12-06 06:52:06.152378] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.380 ms, result 0
00:25:53.525 true
00:25:53.525 06:52:06 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:25:53.785 [2024-12-06 06:52:06.320092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:53.785 [2024-12-06 06:52:06.320137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:25:53.785 [2024-12-06 06:52:06.320151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.881 ms
00:25:53.785 [2024-12-06 06:52:06.320159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:53.785 [2024-12-06 06:52:06.320195] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 0.989 ms, result 0
00:25:53.785 true
00:25:53.785 06:52:06 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 76950
00:25:53.785 06:52:06 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76950 ']'
00:25:53.785 06:52:06 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76950
00:25:53.785 06:52:06 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:25:53.785 06:52:06 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:53.785 06:52:06 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76950
00:25:53.785 killing process with pid 76950 06:52:06 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:25:53.785 06:52:06 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:25:53.785 06:52:06 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76950'
00:25:53.785 06:52:06 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76950
00:25:53.785 06:52:06 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76950
00:25:54.356 [2024-12-06 06:52:07.070798]
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.356 [2024-12-06 06:52:07.070858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:54.356 [2024-12-06 06:52:07.070872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:54.356 [2024-12-06 06:52:07.070882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.356 [2024-12-06 06:52:07.070905] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:54.356 [2024-12-06 06:52:07.073504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.356 [2024-12-06 06:52:07.073535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:54.356 [2024-12-06 06:52:07.073550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.581 ms 00:25:54.356 [2024-12-06 06:52:07.073559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.356 [2024-12-06 06:52:07.073861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.356 [2024-12-06 06:52:07.073871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:54.356 [2024-12-06 06:52:07.073881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:25:54.356 [2024-12-06 06:52:07.073888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.356 [2024-12-06 06:52:07.077935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.356 [2024-12-06 06:52:07.077963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:54.356 [2024-12-06 06:52:07.077978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.026 ms 00:25:54.356 [2024-12-06 06:52:07.077985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.356 [2024-12-06 06:52:07.084921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.356 [2024-12-06 06:52:07.085049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:54.356 [2024-12-06 06:52:07.085072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.899 ms 00:25:54.356 [2024-12-06 06:52:07.085080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.639 [2024-12-06 06:52:07.095335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.639 [2024-12-06 06:52:07.095372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:54.639 [2024-12-06 06:52:07.095388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.198 ms 00:25:54.639 [2024-12-06 06:52:07.095413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.639 [2024-12-06 06:52:07.102627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.639 [2024-12-06 06:52:07.102664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:54.639 [2024-12-06 06:52:07.102677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.174 ms 00:25:54.639 [2024-12-06 06:52:07.102685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.639 [2024-12-06 06:52:07.102826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.639 [2024-12-06 06:52:07.102837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:54.639 [2024-12-06 06:52:07.102848] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:25:54.639 [2024-12-06 06:52:07.102857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.639 [2024-12-06 06:52:07.113850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.639 [2024-12-06 06:52:07.113881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:54.639 [2024-12-06 06:52:07.113893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.970 ms 00:25:54.639 [2024-12-06 06:52:07.113901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.639 [2024-12-06 06:52:07.124168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.639 [2024-12-06 06:52:07.124200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:54.639 [2024-12-06 06:52:07.124217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.228 ms 00:25:54.639 [2024-12-06 06:52:07.124226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.639 [2024-12-06 06:52:07.133859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.639 [2024-12-06 06:52:07.133893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:54.639 [2024-12-06 06:52:07.133905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.593 ms 00:25:54.639 [2024-12-06 06:52:07.133913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.639 [2024-12-06 06:52:07.143484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.639 [2024-12-06 06:52:07.143514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:54.639 [2024-12-06 06:52:07.143526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.503 ms 00:25:54.639 [2024-12-06 06:52:07.143534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.639 [2024-12-06 06:52:07.143570] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:54.639 [2024-12-06 06:52:07.143587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:54.639 [2024-12-06 06:52:07.143600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:54.639 [2024-12-06 06:52:07.143609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:54.639 [2024-12-06 06:52:07.143620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:54.639 [2024-12-06 06:52:07.143629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:54.639 [2024-12-06 06:52:07.143641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:54.639 [2024-12-06 06:52:07.143650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:54.639 [2024-12-06 06:52:07.143660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:54.639 [2024-12-06 06:52:07.143669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:54.639 [2024-12-06 06:52:07.143680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:54.639 [2024-12-06 06:52:07.143688] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:54.639 [2024-12-06 06:52:07.143698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:54.639 [2024-12-06 06:52:07.143708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:54.639 [2024-12-06 06:52:07.143718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:54.639 [2024-12-06 06:52:07.143726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.143739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.143748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.143758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.143766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.143776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.143785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.143796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.143805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.143815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.143823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.143833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.143843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.143853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.143862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.143873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.143883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.143893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.143902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.143912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.143920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 
[2024-12-06 06:52:07.143930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.143939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.143951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.143960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.143969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.143978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.143990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.143999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.144009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.144017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.144027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.144035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.144046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.144055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.144064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.144073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.144083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.144092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.144103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.144112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.144123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.144131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.144141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.144150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.144160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:25:54.640 [2024-12-06 06:52:07.144168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.144179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.144187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.144198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.144206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.144216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.144225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.144235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.144244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.144257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.144266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.144275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:54.640 [2024-12-06 06:52:07.144284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:54.641 [2024-12-06 06:52:07.144294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:54.641 [2024-12-06 06:52:07.144303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:54.641 [2024-12-06 06:52:07.144312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:54.641 [2024-12-06 06:52:07.144321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:54.641 [2024-12-06 06:52:07.144331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:54.641 [2024-12-06 06:52:07.144340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:54.641 [2024-12-06 06:52:07.144350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:54.641 [2024-12-06 06:52:07.144358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:54.641 [2024-12-06 06:52:07.144368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:54.641 [2024-12-06 06:52:07.144377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:54.641 [2024-12-06 06:52:07.144388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:54.641 [2024-12-06 06:52:07.144397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:25:54.641 [2024-12-06 06:52:07.144408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:54.641 [2024-12-06 06:52:07.144417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:54.641 [2024-12-06 06:52:07.144427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:54.641 [2024-12-06 06:52:07.144436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:54.641 [2024-12-06 06:52:07.144444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:54.641 [2024-12-06 06:52:07.144451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:54.641 [2024-12-06 06:52:07.144474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:54.641 [2024-12-06 06:52:07.144483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:54.641 [2024-12-06 06:52:07.144493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:54.641 [2024-12-06 06:52:07.144501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:54.641 [2024-12-06 06:52:07.144512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:54.641 [2024-12-06 06:52:07.144519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:54.641 [2024-12-06 06:52:07.144528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:54.641 [2024-12-06 06:52:07.144535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:54.641 [2024-12-06 06:52:07.144544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:54.641 [2024-12-06 06:52:07.144567] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:54.641 [2024-12-06 06:52:07.144580] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5ee066fa-e7bc-4a33-b1a8-f35f9ed69a0f 00:25:54.641 [2024-12-06 06:52:07.144590] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:54.641 [2024-12-06 06:52:07.144599] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:54.641 [2024-12-06 06:52:07.144606] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:54.641 [2024-12-06 06:52:07.144616] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:54.641 [2024-12-06 06:52:07.144624] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:54.641 [2024-12-06 06:52:07.144633] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:54.641 [2024-12-06 06:52:07.144640] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:54.641 [2024-12-06 06:52:07.144648] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:54.641 [2024-12-06 06:52:07.144654] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:54.641 [2024-12-06 06:52:07.144662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:54.641 [2024-12-06 06:52:07.144670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:54.641 [2024-12-06 06:52:07.144679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.094 ms 00:25:54.641 [2024-12-06 06:52:07.144686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.641 [2024-12-06 06:52:07.157034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.641 [2024-12-06 06:52:07.157066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:54.641 [2024-12-06 06:52:07.157080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.314 ms 00:25:54.641 [2024-12-06 06:52:07.157088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.641 [2024-12-06 06:52:07.157450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.641 [2024-12-06 06:52:07.157478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:54.641 [2024-12-06 06:52:07.157492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:25:54.641 [2024-12-06 06:52:07.157499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.642 [2024-12-06 06:52:07.201003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.642 [2024-12-06 06:52:07.201047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:54.642 [2024-12-06 06:52:07.201059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.642 [2024-12-06 06:52:07.201068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.642 [2024-12-06 06:52:07.201188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.642 [2024-12-06 06:52:07.201198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:54.642 [2024-12-06 06:52:07.201210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.642 [2024-12-06 06:52:07.201218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.642 [2024-12-06 06:52:07.201263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.642 [2024-12-06 06:52:07.201272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:54.642 [2024-12-06 06:52:07.201284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.642 [2024-12-06 06:52:07.201291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.642 [2024-12-06 06:52:07.201310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.642 [2024-12-06 06:52:07.201318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:54.642 [2024-12-06 06:52:07.201327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.642 [2024-12-06 06:52:07.201336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.642 [2024-12-06 06:52:07.277163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.642 [2024-12-06 06:52:07.277324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:54.642 [2024-12-06 06:52:07.277345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.642 [2024-12-06 06:52:07.277353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.642 [2024-12-06 
06:52:07.340776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.642 [2024-12-06 06:52:07.340824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:54.642 [2024-12-06 06:52:07.340838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.642 [2024-12-06 06:52:07.340848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.642 [2024-12-06 06:52:07.340935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.642 [2024-12-06 06:52:07.340944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:54.642 [2024-12-06 06:52:07.340956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.642 [2024-12-06 06:52:07.340964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.642 [2024-12-06 06:52:07.340994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.642 [2024-12-06 06:52:07.341003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:54.642 [2024-12-06 06:52:07.341012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.642 [2024-12-06 06:52:07.341019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.642 [2024-12-06 06:52:07.341110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.642 [2024-12-06 06:52:07.341120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:54.642 [2024-12-06 06:52:07.341129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.642 [2024-12-06 06:52:07.341136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.642 [2024-12-06 06:52:07.341168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.642 [2024-12-06 06:52:07.341177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:54.642 [2024-12-06 06:52:07.341185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.642 [2024-12-06 06:52:07.341193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.642 [2024-12-06 06:52:07.341230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.642 [2024-12-06 06:52:07.341238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:54.642 [2024-12-06 06:52:07.341249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.642 [2024-12-06 06:52:07.341257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.642 [2024-12-06 06:52:07.341298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.642 [2024-12-06 06:52:07.341307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:54.642 [2024-12-06 06:52:07.341317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.642 [2024-12-06 06:52:07.341324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.642 [2024-12-06 06:52:07.341450] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 270.632 ms, result 0 00:25:55.296 06:52:07 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:25:55.297 06:52:07 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:25:55.557 [2024-12-06 06:52:08.059533] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization...
00:25:55.557 [2024-12-06 06:52:08.059813] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77004 ]
00:25:55.557 [2024-12-06 06:52:08.217615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:55.817 [2024-12-06 06:52:08.319681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:56.077 [2024-12-06 06:52:08.576272] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:25:56.077 [2024-12-06 06:52:08.576338] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:25:56.077 [2024-12-06 06:52:08.730128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:56.077 [2024-12-06 06:52:08.730330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:25:56.077 [2024-12-06 06:52:08.730351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:25:56.077 [2024-12-06 06:52:08.730361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:56.077 [2024-12-06 06:52:08.733052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:56.077 [2024-12-06 06:52:08.733091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:25:56.077 [2024-12-06 06:52:08.733101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.668 ms
00:25:56.077 [2024-12-06 06:52:08.733109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:56.077 [2024-12-06 06:52:08.733195] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:25:56.077 [2024-12-06 06:52:08.733972] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:25:56.077 [2024-12-06 06:52:08.734074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:56.077 [2024-12-06 06:52:08.734129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:25:56.077 [2024-12-06 06:52:08.734188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.887 ms
00:25:56.077 [2024-12-06 06:52:08.734211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:56.077 [2024-12-06 06:52:08.735421] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:25:56.077 [2024-12-06 06:52:08.747730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:56.077 [2024-12-06 06:52:08.747856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:25:56.077 [2024-12-06 06:52:08.747872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.311 ms
00:25:56.077 [2024-12-06 06:52:08.747880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:56.077 [2024-12-06 06:52:08.747969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:56.077 [2024-12-06 06:52:08.747981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:25:56.077 [2024-12-06 06:52:08.747989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*:
[FTL][ftl0] duration: 0.024 ms 00:25:56.077 [2024-12-06 06:52:08.747997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.077 [2024-12-06 06:52:08.753062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.077 [2024-12-06 06:52:08.753176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:56.077 [2024-12-06 06:52:08.753190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.024 ms 00:25:56.077 [2024-12-06 06:52:08.753197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.077 [2024-12-06 06:52:08.753285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.077 [2024-12-06 06:52:08.753295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:56.077 [2024-12-06 06:52:08.753303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:25:56.077 [2024-12-06 06:52:08.753310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.077 [2024-12-06 06:52:08.753337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.077 [2024-12-06 06:52:08.753345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:56.077 [2024-12-06 06:52:08.753353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:56.077 [2024-12-06 06:52:08.753360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.077 [2024-12-06 06:52:08.753380] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:56.077 [2024-12-06 06:52:08.756696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.077 [2024-12-06 06:52:08.756724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:56.077 [2024-12-06 06:52:08.756733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.322 ms 00:25:56.077 [2024-12-06 06:52:08.756740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.077 [2024-12-06 06:52:08.756778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.077 [2024-12-06 06:52:08.756786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:56.077 [2024-12-06 06:52:08.756794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:56.077 [2024-12-06 06:52:08.756801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.077 [2024-12-06 06:52:08.756821] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:56.077 [2024-12-06 06:52:08.756839] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:56.077 [2024-12-06 06:52:08.756873] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:56.077 [2024-12-06 06:52:08.756888] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:56.077 [2024-12-06 06:52:08.756991] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:56.077 [2024-12-06 06:52:08.757001] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:56.077 [2024-12-06 06:52:08.757011] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:56.077 [2024-12-06 06:52:08.757024] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:56.077 [2024-12-06 06:52:08.757033] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:56.077 [2024-12-06 06:52:08.757041] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:56.077 [2024-12-06 06:52:08.757049] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:56.077 [2024-12-06 06:52:08.757056] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:56.077 [2024-12-06 06:52:08.757062] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:56.077 [2024-12-06 06:52:08.757070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.077 [2024-12-06 06:52:08.757077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:56.077 [2024-12-06 06:52:08.757084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:25:56.077 [2024-12-06 06:52:08.757092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.077 [2024-12-06 06:52:08.757179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.077 [2024-12-06 06:52:08.757189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:56.077 [2024-12-06 06:52:08.757196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:56.077 [2024-12-06 06:52:08.757203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.077 [2024-12-06 06:52:08.757318] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:56.077 [2024-12-06 06:52:08.757329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:56.077 [2024-12-06 06:52:08.757336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:56.077 [2024-12-06 06:52:08.757344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:56.077 [2024-12-06 06:52:08.757352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:56.077 [2024-12-06 06:52:08.757359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:56.077 [2024-12-06 06:52:08.757365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:56.077 [2024-12-06 06:52:08.757372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:56.077 [2024-12-06 06:52:08.757379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:56.077 [2024-12-06 06:52:08.757385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:56.077 [2024-12-06 06:52:08.757392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:56.077 [2024-12-06 06:52:08.757405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:56.077 [2024-12-06 06:52:08.757411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:56.077 [2024-12-06 06:52:08.757417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:56.077 [2024-12-06 06:52:08.757424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:56.077 [2024-12-06 06:52:08.757430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:56.077 [2024-12-06 06:52:08.757436] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:56.077 [2024-12-06 06:52:08.757444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:56.077 [2024-12-06 06:52:08.757450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:56.077 [2024-12-06 06:52:08.757456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:56.077 [2024-12-06 06:52:08.757487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:56.077 [2024-12-06 06:52:08.757494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:56.077 [2024-12-06 06:52:08.757502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:56.078 [2024-12-06 06:52:08.757509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:56.078 [2024-12-06 06:52:08.757516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:56.078 [2024-12-06 06:52:08.757522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:56.078 [2024-12-06 06:52:08.757529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:56.078 [2024-12-06 06:52:08.757535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:56.078 [2024-12-06 06:52:08.757542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:56.078 [2024-12-06 06:52:08.757549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:56.078 [2024-12-06 06:52:08.757555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:56.078 [2024-12-06 06:52:08.757562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:56.078 [2024-12-06 06:52:08.757569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:56.078 [2024-12-06 06:52:08.757575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:56.078 [2024-12-06 06:52:08.757582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:56.078 [2024-12-06 06:52:08.757588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:56.078 [2024-12-06 06:52:08.757595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:56.078 [2024-12-06 06:52:08.757601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:56.078 [2024-12-06 06:52:08.757607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:56.078 [2024-12-06 06:52:08.757613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:56.078 [2024-12-06 06:52:08.757620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:56.078 [2024-12-06 06:52:08.757626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:56.078 [2024-12-06 06:52:08.757633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:56.078 [2024-12-06 06:52:08.757640] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:56.078 [2024-12-06 06:52:08.757648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:56.078 [2024-12-06 06:52:08.757661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:56.078 [2024-12-06 06:52:08.757667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:56.078 [2024-12-06 06:52:08.757675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:56.078 
[2024-12-06 06:52:08.757682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:56.078 [2024-12-06 06:52:08.757690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:56.078 [2024-12-06 06:52:08.757697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:56.078 [2024-12-06 06:52:08.757704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:56.078 [2024-12-06 06:52:08.757710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:56.078 [2024-12-06 06:52:08.757718] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:56.078 [2024-12-06 06:52:08.757726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:56.078 [2024-12-06 06:52:08.757734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:56.078 [2024-12-06 06:52:08.757741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:56.078 [2024-12-06 06:52:08.757748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:56.078 [2024-12-06 06:52:08.757755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:56.078 [2024-12-06 06:52:08.757762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:56.078 [2024-12-06 06:52:08.757769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:56.078 [2024-12-06 06:52:08.757776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:56.078 [2024-12-06 06:52:08.757783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:56.078 [2024-12-06 06:52:08.757790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:56.078 [2024-12-06 06:52:08.757799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:56.078 [2024-12-06 06:52:08.757806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:56.078 [2024-12-06 06:52:08.757812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:56.078 [2024-12-06 06:52:08.757819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:56.078 [2024-12-06 06:52:08.757826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:56.078 [2024-12-06 06:52:08.757833] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:56.078 [2024-12-06 06:52:08.757841] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:56.078 [2024-12-06 06:52:08.757849] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:56.078 [2024-12-06 06:52:08.757856] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:56.078 [2024-12-06 06:52:08.757863] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:56.078 [2024-12-06 06:52:08.757870] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:56.078 [2024-12-06 06:52:08.757878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.078 [2024-12-06 06:52:08.757887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:56.078 [2024-12-06 06:52:08.757894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.628 ms 00:25:56.078 [2024-12-06 06:52:08.757901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.078 [2024-12-06 06:52:08.783862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.078 [2024-12-06 06:52:08.783979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:56.078 [2024-12-06 06:52:08.784042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.891 ms 00:25:56.078 [2024-12-06 06:52:08.784066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.078 [2024-12-06 06:52:08.784205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.078 [2024-12-06 06:52:08.784324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:56.078 [2024-12-06 06:52:08.784385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:25:56.078 [2024-12-06 06:52:08.784404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.339 [2024-12-06 06:52:08.833595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.339 [2024-12-06 06:52:08.833726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:56.339 [2024-12-06 06:52:08.833790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.157 ms 00:25:56.339 [2024-12-06 06:52:08.833813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.339 [2024-12-06 06:52:08.833941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.339 [2024-12-06 06:52:08.833973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:56.339 [2024-12-06 06:52:08.833994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:56.339 [2024-12-06 06:52:08.834042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.339 [2024-12-06 06:52:08.834364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.339 [2024-12-06 06:52:08.834399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:56.339 [2024-12-06 06:52:08.834546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:25:56.339 [2024-12-06 06:52:08.834577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.339 [2024-12-06 
06:52:08.834717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.339 [2024-12-06 06:52:08.834740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:56.339 [2024-12-06 06:52:08.834791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:25:56.339 [2024-12-06 06:52:08.834812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.339 [2024-12-06 06:52:08.847987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.339 [2024-12-06 06:52:08.848128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:56.339 [2024-12-06 06:52:08.848744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.104 ms 00:25:56.339 [2024-12-06 06:52:08.848825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.339 [2024-12-06 06:52:08.861024] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:25:56.339 [2024-12-06 06:52:08.861149] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:56.339 [2024-12-06 06:52:08.861235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.339 [2024-12-06 06:52:08.861289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:56.339 [2024-12-06 06:52:08.861314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.279 ms 00:25:56.339 [2024-12-06 06:52:08.861332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.339 [2024-12-06 06:52:08.885302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.339 [2024-12-06 06:52:08.885408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:56.339 [2024-12-06 06:52:08.885470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.894 ms 00:25:56.339 [2024-12-06 06:52:08.885494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.339 [2024-12-06 06:52:08.896627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.339 [2024-12-06 06:52:08.896723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:56.339 [2024-12-06 06:52:08.896768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.060 ms 00:25:56.339 [2024-12-06 06:52:08.896789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.339 [2024-12-06 06:52:08.908231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.339 [2024-12-06 06:52:08.908331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:56.339 [2024-12-06 06:52:08.908376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.374 ms 00:25:56.339 [2024-12-06 06:52:08.908397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.339 [2024-12-06 06:52:08.909022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.339 [2024-12-06 06:52:08.909097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:56.339 [2024-12-06 06:52:08.909141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.520 ms 00:25:56.339 [2024-12-06 06:52:08.909162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.339 [2024-12-06 06:52:08.964437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:25:56.339 [2024-12-06 06:52:08.964611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:56.339 [2024-12-06 06:52:08.964663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.238 ms 00:25:56.339 [2024-12-06 06:52:08.964686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.340 [2024-12-06 06:52:08.975655] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:56.340 [2024-12-06 06:52:08.990061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.340 [2024-12-06 06:52:08.990211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:56.340 [2024-12-06 06:52:08.990263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.978 ms 00:25:56.340 [2024-12-06 06:52:08.990292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.340 [2024-12-06 06:52:08.990399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.340 [2024-12-06 06:52:08.990426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:56.340 [2024-12-06 06:52:08.990446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:56.340 [2024-12-06 06:52:08.990489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.340 [2024-12-06 06:52:08.990606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.340 [2024-12-06 06:52:08.990633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:56.340 [2024-12-06 06:52:08.990656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:25:56.340 [2024-12-06 06:52:08.990680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.340 [2024-12-06 06:52:08.990723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.340 [2024-12-06 06:52:08.990744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:56.340 [2024-12-06 06:52:08.990800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:56.340 [2024-12-06 06:52:08.990823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.340 [2024-12-06 06:52:08.990870] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:56.340 [2024-12-06 06:52:08.991210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.340 [2024-12-06 06:52:08.991307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:56.340 [2024-12-06 06:52:08.991334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.341 ms 00:25:56.340 [2024-12-06 06:52:08.991353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.274 [2024-12-06 06:52:09.781815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.274 [2024-12-06 06:52:09.781994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:57.274 [2024-12-06 06:52:09.782050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 790.419 ms 00:25:57.274 [2024-12-06 06:52:09.782083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.274 [2024-12-06 06:52:09.782506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.274 [2024-12-06 06:52:09.782569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:25:57.274 [2024-12-06 06:52:09.782581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:25:57.274 [2024-12-06 06:52:09.782591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.274 [2024-12-06 06:52:09.784202] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:57.274 [2024-12-06 06:52:09.787421] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 1053.798 ms, result 0 00:25:57.274 [2024-12-06 06:52:09.788568] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:57.274 [2024-12-06 06:52:09.801429] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:58.208  [2024-12-06T06:52:11.884Z] Copying: 20/256 [MB] (20 MBps) [2024-12-06T06:52:12.826Z] Copying: 37/256 [MB] (17 MBps) [2024-12-06T06:52:14.216Z] Copying: 53/256 [MB] (15 MBps) [2024-12-06T06:52:15.157Z] Copying: 79/256 [MB] (26 MBps) [2024-12-06T06:52:16.097Z] Copying: 102/256 [MB] (23 MBps) [2024-12-06T06:52:17.037Z] Copying: 126/256 [MB] (24 MBps) [2024-12-06T06:52:17.979Z] Copying: 145/256 [MB] (19 MBps) [2024-12-06T06:52:18.929Z] Copying: 160/256 [MB] (14 MBps) [2024-12-06T06:52:19.873Z] Copying: 172/256 [MB] (11 MBps) [2024-12-06T06:52:20.817Z] Copying: 185592/262144 [kB] (9088 kBps) [2024-12-06T06:52:22.194Z] Copying: 195324/262144 [kB] (9732 kBps) [2024-12-06T06:52:23.135Z] Copying: 204596/262144 [kB] (9272 kBps) [2024-12-06T06:52:24.069Z] Copying: 210/256 [MB] (10 MBps) [2024-12-06T06:52:24.996Z] Copying: 222/256 [MB] (11 MBps) [2024-12-06T06:52:25.994Z] Copying: 232/256 [MB] (10 MBps) [2024-12-06T06:52:26.924Z] Copying: 243/256 [MB] (10 MBps) [2024-12-06T06:52:27.181Z] Copying: 259080/262144 [kB] (10192 kBps) [2024-12-06T06:52:27.181Z] Copying: 256/256 [MB] (average 14 MBps)[2024-12-06 06:52:27.080782] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:14.440 [2024-12-06 06:52:27.089892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.440 [2024-12-06 06:52:27.090020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:14.440 [2024-12-06 06:52:27.090086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:14.440 [2024-12-06 06:52:27.090110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.440 [2024-12-06 06:52:27.090136] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:26:14.440 [2024-12-06 06:52:27.092716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.440 [2024-12-06 06:52:27.092745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:14.440 [2024-12-06 06:52:27.092757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.566 ms 00:26:14.440 [2024-12-06 06:52:27.092764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.440 [2024-12-06 06:52:27.093021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.440 [2024-12-06 06:52:27.093031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:14.440 [2024-12-06 06:52:27.093039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:26:14.440 [2024-12-06 06:52:27.093046] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.440 [2024-12-06 06:52:27.096821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.440 [2024-12-06 06:52:27.096902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:14.440 [2024-12-06 06:52:27.096948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.756 ms 00:26:14.440 [2024-12-06 06:52:27.096970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.440 [2024-12-06 06:52:27.104103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.440 [2024-12-06 06:52:27.104198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:14.440 [2024-12-06 06:52:27.104246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.102 ms 00:26:14.440 [2024-12-06 06:52:27.104267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.440 [2024-12-06 06:52:27.128485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.440 [2024-12-06 06:52:27.128600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:14.440 [2024-12-06 06:52:27.128650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.149 ms 00:26:14.440 [2024-12-06 06:52:27.128672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.440 [2024-12-06 06:52:27.142304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.440 [2024-12-06 06:52:27.142408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:14.440 [2024-12-06 06:52:27.142475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.544 ms 00:26:14.440 [2024-12-06 06:52:27.142499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.440 [2024-12-06 06:52:27.142674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.440 [2024-12-06 06:52:27.142702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:14.440 [2024-12-06 06:52:27.142813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:26:14.440 [2024-12-06 06:52:27.142832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.440 [2024-12-06 06:52:27.166530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.440 [2024-12-06 06:52:27.166635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:14.440 [2024-12-06 06:52:27.166648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.642 ms 00:26:14.440 [2024-12-06 06:52:27.166656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.698 [2024-12-06 06:52:27.190180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.698 [2024-12-06 06:52:27.190295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:14.698 [2024-12-06 06:52:27.190310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.495 ms 00:26:14.698 [2024-12-06 06:52:27.190317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.698 [2024-12-06 06:52:27.213137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.698 [2024-12-06 06:52:27.213242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:14.698 [2024-12-06 06:52:27.213258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 22.789 ms 00:26:14.698 [2024-12-06 06:52:27.213265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.698 [2024-12-06 06:52:27.235414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.698 [2024-12-06 06:52:27.235447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:14.698 [2024-12-06 06:52:27.235457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.091 ms 00:26:14.698 [2024-12-06 06:52:27.235476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.698 [2024-12-06 06:52:27.235510] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:14.698 [2024-12-06 06:52:27.235524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:14.698 [2024-12-06 06:52:27.235542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:14.698 [2024-12-06 06:52:27.235551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:14.698 [2024-12-06 06:52:27.235558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 
00:26:14.699 [2024-12-06 06:52:27.235685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 
wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.235997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.236005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.236012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.236020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.236027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.236034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.236041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.236049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.236056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.236064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.236071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.236078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.236086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.236094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.236101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.236108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.236116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.236124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.236131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.236138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.236145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.236152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.236159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.236166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.236173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.236180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.236187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.236195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.236202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:14.699 [2024-12-06 06:52:27.236209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:14.700 [2024-12-06 06:52:27.236217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:14.700 [2024-12-06 06:52:27.236232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:14.700 [2024-12-06 06:52:27.236240] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:14.700 [2024-12-06 06:52:27.236248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:14.700 [2024-12-06 06:52:27.236255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:14.700 [2024-12-06 06:52:27.236263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:14.700 [2024-12-06 06:52:27.236270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:14.700 [2024-12-06 06:52:27.236278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:14.700 [2024-12-06 06:52:27.236294] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:14.700 [2024-12-06 06:52:27.236302] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5ee066fa-e7bc-4a33-b1a8-f35f9ed69a0f 00:26:14.700 [2024-12-06 06:52:27.236310] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:14.700 [2024-12-06 06:52:27.236317] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:14.700 [2024-12-06 06:52:27.236324] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:14.700 [2024-12-06 06:52:27.236331] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:14.700 [2024-12-06 06:52:27.236338] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:14.700 [2024-12-06 06:52:27.236346] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:14.700 [2024-12-06 06:52:27.236355] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:14.700 [2024-12-06 06:52:27.236361] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:14.700 [2024-12-06 06:52:27.236368] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:14.700 [2024-12-06 06:52:27.236374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.700 [2024-12-06 06:52:27.236382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:14.700 [2024-12-06 06:52:27.236390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.866 ms 00:26:14.700 [2024-12-06 06:52:27.236397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.700 [2024-12-06 06:52:27.248476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.700 [2024-12-06 06:52:27.248507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:14.700 [2024-12-06 06:52:27.248518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.062 ms 00:26:14.700 [2024-12-06 06:52:27.248526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.700 [2024-12-06 06:52:27.248877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.700 [2024-12-06 06:52:27.248896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:14.700 [2024-12-06 06:52:27.248904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:26:14.700 [2024-12-06 06:52:27.248912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.700 [2024-12-06 06:52:27.283583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:14.700 [2024-12-06 06:52:27.283631] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:14.700 [2024-12-06 06:52:27.283644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:14.700 [2024-12-06 06:52:27.283657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.700 [2024-12-06 06:52:27.283770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:14.700 [2024-12-06 06:52:27.283781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:14.700 [2024-12-06 06:52:27.283790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:14.700 [2024-12-06 06:52:27.283798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.700 [2024-12-06 06:52:27.283848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:14.700 [2024-12-06 06:52:27.283859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:14.700 [2024-12-06 06:52:27.283867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:14.700 [2024-12-06 06:52:27.283876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.700 [2024-12-06 06:52:27.283898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:14.700 [2024-12-06 06:52:27.283907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:14.700 [2024-12-06 06:52:27.283916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:14.700 [2024-12-06 06:52:27.283924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.700 [2024-12-06 06:52:27.359156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:14.700 [2024-12-06 06:52:27.359331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:14.700 [2024-12-06 06:52:27.359347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:14.700 [2024-12-06 06:52:27.359355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.700 [2024-12-06 06:52:27.421139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:14.700 [2024-12-06 06:52:27.421186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:14.700 [2024-12-06 06:52:27.421199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:14.700 [2024-12-06 06:52:27.421207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.700 [2024-12-06 06:52:27.421279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:14.700 [2024-12-06 06:52:27.421288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:14.700 [2024-12-06 06:52:27.421296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:14.700 [2024-12-06 06:52:27.421303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.700 [2024-12-06 06:52:27.421331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:14.700 [2024-12-06 06:52:27.421341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:14.700 [2024-12-06 06:52:27.421349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:14.700 [2024-12-06 06:52:27.421356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.700 [2024-12-06 06:52:27.421440] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback
00:26:14.700 [2024-12-06 06:52:27.421449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:26:14.700 [2024-12-06 06:52:27.421457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:14.700 [2024-12-06 06:52:27.421487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:14.700 [2024-12-06 06:52:27.421517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:14.700 [2024-12-06 06:52:27.421526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:26:14.700 [2024-12-06 06:52:27.421536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:14.700 [2024-12-06 06:52:27.421543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:14.700 [2024-12-06 06:52:27.421578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:14.700 [2024-12-06 06:52:27.421587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:26:14.700 [2024-12-06 06:52:27.421594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:14.700 [2024-12-06 06:52:27.421602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:14.700 [2024-12-06 06:52:27.421643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:14.700 [2024-12-06 06:52:27.421655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:26:14.700 [2024-12-06 06:52:27.421663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:14.700 [2024-12-06 06:52:27.421671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:14.700 [2024-12-06 06:52:27.421797] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 331.900 ms, result 0
00:26:15.635
00:26:15.636
00:26:15.636 06:52:28 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero
00:26:15.636 06:52:28 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data
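
The two shell steps above are the trim verification itself: cmp --bytes=4194304 compares the first 4 MiB of the exported data file byte-for-byte against /dev/zero (asserting that the trimmed region reads back as zeroes), and md5sum fingerprints the whole file. The Python sketch below performs an equivalent offline check; it is not part of trim.sh, and the path and byte count are simply the values printed in the log above.

    # Minimal sketch (assuming it is run on the test node after the export),
    # mirroring the cmp-vs-/dev/zero and md5sum steps shown in the log.
    import hashlib

    DATA = "/home/vagrant/spdk_repo/spdk/test/ftl/data"  # path taken from the log
    CHECK_BYTES = 4 * 1024 * 1024                        # 4194304, as passed to cmp

    def region_is_zero(path, nbytes):
        # True if the first nbytes of the file are all zero, like cmp vs /dev/zero.
        with open(path, "rb") as f:
            remaining = nbytes
            while remaining > 0:
                chunk = f.read(min(1 << 20, remaining))
                if not chunk:  # file shorter than the checked region
                    return False
                if chunk.count(b"\x00") != len(chunk):
                    return False
                remaining -= len(chunk)
        return True

    def md5_of(path):
        # Whole-file MD5, matching what the md5sum step prints.
        h = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    print("trimmed region zeroed:", region_is_zero(DATA, CHECK_BYTES))
    print("md5:", md5_of(DATA))
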
00:26:16.201 06:52:28 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:26:16.201 [2024-12-06 06:52:28.729576] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization...
00:26:16.201 [2024-12-06 06:52:28.729701] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77219 ]
00:26:16.202 [2024-12-06 06:52:28.888782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:16.459 [2024-12-06 06:52:28.991663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:26:16.718 [2024-12-06 06:52:29.253192] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:26:16.719 [2024-12-06 06:52:29.253255] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:26:16.719 [2024-12-06 06:52:29.412780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.719 [2024-12-06 06:52:29.412982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:26:16.719 [2024-12-06 06:52:29.413002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:26:16.719 [2024-12-06 06:52:29.413011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:16.719 [2024-12-06 06:52:29.416027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.719 [2024-12-06 06:52:29.416224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:26:16.719 [2024-12-06 06:52:29.416245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.993 ms
00:26:16.719 [2024-12-06 06:52:29.416253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:16.719 [2024-12-06 06:52:29.416428] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:26:16.719 [2024-12-06 06:52:29.417137] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:26:16.719 [2024-12-06 06:52:29.417165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.719 [2024-12-06 06:52:29.417173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:26:16.719 [2024-12-06 06:52:29.417182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.749 ms
00:26:16.719 [2024-12-06 06:52:29.417190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:16.719 [2024-12-06 06:52:29.418339] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:26:16.719 [2024-12-06 06:52:29.431456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.719 [2024-12-06 06:52:29.431502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:26:16.719 [2024-12-06 06:52:29.431514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.118 ms
00:26:16.719 [2024-12-06 06:52:29.431524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:16.719 [2024-12-06 06:52:29.431620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.719 [2024-12-06 06:52:29.431631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:26:16.719 [2024-12-06 06:52:29.431640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms
00:26:16.719 [2024-12-06 06:52:29.431647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:16.719 [2024-12-06 06:52:29.436824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
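
From this point the log is again dominated by trace_step output from mngt/ftl_mngt.c: each management step appears as an Action (or Rollback) marker followed by name, duration, and status lines, as in the entries directly above and below. A small sketch for summarizing those timings from a saved console log is given here; it assumes one log entry per line and the exact [FTL][ftl0] wording used in this run, and is otherwise illustrative.

    # Illustrative only: pair the "name:" / "duration:" lines emitted by
    # trace_step() and rank the FTL management steps by elapsed time.
    import re
    import sys

    NAME_RE = re.compile(r"\[FTL\]\[ftl0\] name: (.+?)\s*$")
    DUR_RE = re.compile(r"\[FTL\]\[ftl0\] duration: ([0-9.]+) ms")

    def step_durations(lines):
        steps, pending = [], None
        for line in lines:
            m = NAME_RE.search(line)
            if m:
                pending = m.group(1)  # remember the step name...
                continue
            m = DUR_RE.search(line)
            if m and pending is not None:
                steps.append((pending, float(m.group(1))))  # ...and pair it
                pending = None
        return steps

    if __name__ == "__main__":
        with open(sys.argv[1]) as f:
            for name, ms in sorted(step_durations(f), key=lambda s: -s[1])[:10]:
                print(f"{ms:10.3f} ms  {name}")

Applied to the 'FTL startup' pass earlier in this output, such a tally would surface 'Set FTL dirty state' (790.419 ms) and 'Restore P2L checkpoints' (55.238 ms) as the dominant steps.
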
00:26:16.719 [2024-12-06 06:52:29.436857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:26:16.719 [2024-12-06 06:52:29.436868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.136 ms
00:26:16.719 [2024-12-06 06:52:29.436876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:16.719 [2024-12-06 06:52:29.436965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.719 [2024-12-06 06:52:29.436974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:26:16.719 [2024-12-06 06:52:29.436982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms
00:26:16.719 [2024-12-06 06:52:29.436990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:16.719 [2024-12-06 06:52:29.437018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.719 [2024-12-06 06:52:29.437026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:26:16.719 [2024-12-06 06:52:29.437034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:26:16.719 [2024-12-06 06:52:29.437041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:16.719 [2024-12-06 06:52:29.437061] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:26:16.719 [2024-12-06 06:52:29.440387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.719 [2024-12-06 06:52:29.440415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:26:16.719 [2024-12-06 06:52:29.440424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.330 ms
00:26:16.719 [2024-12-06 06:52:29.440432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:16.719 [2024-12-06 06:52:29.440485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.719 [2024-12-06 06:52:29.440495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:26:16.719 [2024-12-06 06:52:29.440503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms
00:26:16.719 [2024-12-06 06:52:29.440510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:16.719 [2024-12-06 06:52:29.440531] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:26:16.719 [2024-12-06 06:52:29.440549] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:26:16.719 [2024-12-06 06:52:29.440582] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:26:16.719 [2024-12-06 06:52:29.440597] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:26:16.719 [2024-12-06 06:52:29.440699] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:26:16.719 [2024-12-06 06:52:29.440709] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:26:16.719 [2024-12-06 06:52:29.440719] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:26:16.719 [2024-12-06 06:52:29.440731] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:26:16.719 [2024-12-06 06:52:29.440740] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:26:16.719 [2024-12-06 06:52:29.440748] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
00:26:16.719 [2024-12-06 06:52:29.440755] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:26:16.719 [2024-12-06 06:52:29.440762] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:26:16.719 [2024-12-06 06:52:29.440769] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:26:16.719 [2024-12-06 06:52:29.440777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.719 [2024-12-06 06:52:29.440784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:26:16.719 [2024-12-06 06:52:29.440792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms
00:26:16.719 [2024-12-06 06:52:29.440799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:16.719 [2024-12-06 06:52:29.440886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.719 [2024-12-06 06:52:29.440897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:26:16.719 [2024-12-06 06:52:29.440904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms
00:26:16.719 [2024-12-06 06:52:29.440911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:16.719 [2024-12-06 06:52:29.441024] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:26:16.719 [2024-12-06 06:52:29.441034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:26:16.719 [2024-12-06 06:52:29.441042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:26:16.719 [2024-12-06 06:52:29.441050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:26:16.719 [2024-12-06 06:52:29.441057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:26:16.719 [2024-12-06 06:52:29.441063] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:26:16.719 [2024-12-06 06:52:29.441070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB
00:26:16.719 [2024-12-06 06:52:29.441078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:26:16.719 [2024-12-06 06:52:29.441084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB
00:26:16.719 [2024-12-06 06:52:29.441091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:26:16.719 [2024-12-06 06:52:29.441098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:26:16.720 [2024-12-06 06:52:29.441113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB
00:26:16.720 [2024-12-06 06:52:29.441119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:26:16.720 [2024-12-06 06:52:29.441126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:26:16.720 [2024-12-06 06:52:29.441132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB
00:26:16.720 [2024-12-06 06:52:29.441139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:26:16.720 [2024-12-06 06:52:29.441145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:26:16.720 [2024-12-06 06:52:29.441152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB
00:26:16.720 [2024-12-06 06:52:29.441158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:26:16.720 [2024-12-06 06:52:29.441165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:26:16.720 [2024-12-06 06:52:29.441171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB
00:26:16.720 [2024-12-06 06:52:29.441177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:26:16.720 [2024-12-06 06:52:29.441184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:26:16.720 [2024-12-06 06:52:29.441190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB
00:26:16.720 [2024-12-06 06:52:29.441197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:26:16.720 [2024-12-06 06:52:29.441203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:26:16.720 [2024-12-06 06:52:29.441210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB
00:26:16.720 [2024-12-06 06:52:29.441216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:26:16.720 [2024-12-06 06:52:29.441223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:26:16.720 [2024-12-06 06:52:29.441229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB
00:26:16.720 [2024-12-06 06:52:29.441235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:26:16.720 [2024-12-06 06:52:29.441242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:26:16.720 [2024-12-06 06:52:29.441248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB
00:26:16.720 [2024-12-06 06:52:29.441255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:26:16.720 [2024-12-06 06:52:29.441261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:26:16.720 [2024-12-06 06:52:29.441267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB
00:26:16.720 [2024-12-06 06:52:29.441274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:26:16.720 [2024-12-06 06:52:29.441280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:26:16.720 [2024-12-06 06:52:29.441286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB
00:26:16.720 [2024-12-06 06:52:29.441292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:26:16.720 [2024-12-06 06:52:29.441299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:26:16.720 [2024-12-06 06:52:29.441305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB
00:26:16.720 [2024-12-06 06:52:29.441312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:26:16.720 [2024-12-06 06:52:29.441319] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:26:16.720 [2024-12-06 06:52:29.441327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:26:16.720 [2024-12-06 06:52:29.441336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:26:16.720 [2024-12-06 06:52:29.441343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:26:16.720 [2024-12-06 06:52:29.441350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:26:16.720 [2024-12-06 06:52:29.441357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:26:16.720 [2024-12-06 06:52:29.441364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:26:16.720 [2024-12-06 06:52:29.441370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:26:16.720 [2024-12-06 06:52:29.441377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:26:16.720 [2024-12-06 06:52:29.441383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:26:16.720 [2024-12-06 06:52:29.441391] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:26:16.720 [2024-12-06 06:52:29.441399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:26:16.720 [2024-12-06 06:52:29.441407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:26:16.720 [2024-12-06 06:52:29.441414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:26:16.720 [2024-12-06 06:52:29.441421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:26:16.720 [2024-12-06 06:52:29.441429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:26:16.720 [2024-12-06 06:52:29.441435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:26:16.720 [2024-12-06 06:52:29.441442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:26:16.720 [2024-12-06 06:52:29.441449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:26:16.720 [2024-12-06 06:52:29.441455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:26:16.720 [2024-12-06 06:52:29.441473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:26:16.720 [2024-12-06 06:52:29.441481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:26:16.720 [2024-12-06 06:52:29.441487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:26:16.720 [2024-12-06 06:52:29.441494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:26:16.720 [2024-12-06 06:52:29.441501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:26:16.720 [2024-12-06 06:52:29.441508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:26:16.720 [2024-12-06 06:52:29.441515] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:26:16.720 [2024-12-06 06:52:29.441523] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:26:16.720 [2024-12-06 06:52:29.441531] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:26:16.720 [2024-12-06 06:52:29.441539] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:26:16.720 [2024-12-06 06:52:29.441547] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:26:16.720 [2024-12-06 06:52:29.441553] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:26:16.720 [2024-12-06 06:52:29.441562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.720 [2024-12-06 06:52:29.441572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:26:16.720 [2024-12-06 06:52:29.441579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.608 ms
00:26:16.720 [2024-12-06 06:52:29.441586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:16.979 [2024-12-06 06:52:29.467958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.979 [2024-12-06 06:52:29.468000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:26:16.979 [2024-12-06 06:52:29.468011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.301 ms
00:26:16.979 [2024-12-06 06:52:29.468019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:16.979 [2024-12-06 06:52:29.468147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.979 [2024-12-06 06:52:29.468157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:26:16.979 [2024-12-06 06:52:29.468166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms
00:26:16.979 [2024-12-06 06:52:29.468173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:16.979 [2024-12-06 06:52:29.510754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.979 [2024-12-06 06:52:29.510807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:26:16.979 [2024-12-06 06:52:29.510825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.558 ms
00:26:16.979 [2024-12-06 06:52:29.510835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:16.979 [2024-12-06 06:52:29.510955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.979 [2024-12-06 06:52:29.510968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:26:16.979 [2024-12-06 06:52:29.510978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:26:16.979 [2024-12-06 06:52:29.510987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:16.979 [2024-12-06 06:52:29.511321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.979 [2024-12-06 06:52:29.511338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:26:16.979 [2024-12-06 06:52:29.511354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms
00:26:16.979 [2024-12-06 06:52:29.511363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:16.979 [2024-12-06 06:52:29.511544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.979 [2024-12-06 06:52:29.511556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:26:16.979 [2024-12-06 06:52:29.511565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms
00:26:16.979 [2024-12-06 06:52:29.511574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:16.979 [2024-12-06 06:52:29.525020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.979 [2024-12-06 06:52:29.525055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:26:16.979 [2024-12-06 06:52:29.525066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.425 ms
00:26:16.979 [2024-12-06 06:52:29.525074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:16.979 [2024-12-06 06:52:29.537837] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3
00:26:16.980 [2024-12-06 06:52:29.537873] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:26:16.980 [2024-12-06 06:52:29.537886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.980 [2024-12-06 06:52:29.537895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:26:16.980 [2024-12-06 06:52:29.537904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.707 ms
00:26:16.980 [2024-12-06 06:52:29.537912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:16.980 [2024-12-06 06:52:29.562145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.980 [2024-12-06 06:52:29.562332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:26:16.980 [2024-12-06 06:52:29.562351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.154 ms
00:26:16.980 [2024-12-06 06:52:29.562360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:16.980 [2024-12-06 06:52:29.573944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.980 [2024-12-06 06:52:29.574054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:26:16.980 [2024-12-06 06:52:29.574103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.505 ms
00:26:16.980 [2024-12-06 06:52:29.574125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:16.980 [2024-12-06 06:52:29.585940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.980 [2024-12-06 06:52:29.586053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:26:16.980 [2024-12-06 06:52:29.586102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.738 ms
00:26:16.980 [2024-12-06 06:52:29.586123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:16.980 [2024-12-06 06:52:29.587035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.980 [2024-12-06 06:52:29.587160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:26:16.980 [2024-12-06 06:52:29.587216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.522 ms
00:26:16.980 [2024-12-06 06:52:29.587239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:16.980 [2024-12-06 06:52:29.643041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.980 [2024-12-06 06:52:29.643230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:26:16.980 [2024-12-06 06:52:29.643285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.761 ms
00:26:16.980 [2024-12-06 06:52:29.643308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:16.980 [2024-12-06 06:52:29.663057] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:26:16.980 [2024-12-06 06:52:29.677744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.980 [2024-12-06 06:52:29.677876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:26:16.980 [2024-12-06 06:52:29.677930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.802 ms
00:26:16.980 [2024-12-06 06:52:29.677958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:16.980 [2024-12-06 06:52:29.678063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.980 [2024-12-06 06:52:29.678090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:26:16.980 [2024-12-06 06:52:29.678110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms
00:26:16.980 [2024-12-06 06:52:29.678129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:16.980 [2024-12-06 06:52:29.678190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.980 [2024-12-06 06:52:29.678292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:26:16.980 [2024-12-06 06:52:29.678312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms
00:26:16.980 [2024-12-06 06:52:29.678334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:16.980 [2024-12-06 06:52:29.678376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.980 [2024-12-06 06:52:29.678398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:26:16.980 [2024-12-06 06:52:29.678485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms
00:26:16.980 [2024-12-06 06:52:29.678510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:16.980 [2024-12-06 06:52:29.678560] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:26:16.980 [2024-12-06 06:52:29.678631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.980 [2024-12-06 06:52:29.679046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:26:16.980 [2024-12-06 06:52:29.679094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms
00:26:16.980 [2024-12-06 06:52:29.679149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:16.980 [2024-12-06 06:52:29.703255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.980 [2024-12-06 06:52:29.703418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:26:16.980 [2024-12-06 06:52:29.703525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.045 ms
00:26:16.980 [2024-12-06 06:52:29.703566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:16.980 [2024-12-06 06:52:29.703704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:16.980 [2024-12-06 06:52:29.703865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:26:16.980 [2024-12-06 06:52:29.703905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms
00:26:16.980 [2024-12-06 06:52:29.703938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:16.980 [2024-12-06 06:52:29.705149] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:26:16.980 [2024-12-06 06:52:29.710320] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 291.971 ms, result 0
00:26:16.980 [2024-12-06 06:52:29.711660] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:26:17.238 [2024-12-06 06:52:29.727897] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:26:17.496  [2024-12-06T06:52:30.237Z] Copying: 4096/4096 [kB] (average 10 MBps)[2024-12-06 06:52:30.103592] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:26:17.496 [2024-12-06 06:52:30.113059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:17.496 [2024-12-06 06:52:30.113192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:26:17.496 [2024-12-06 06:52:30.113284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:26:17.496 [2024-12-06 06:52:30.113302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:17.496 [2024-12-06 06:52:30.113339] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:26:17.496 [2024-12-06 06:52:30.116025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:17.496 [2024-12-06 06:52:30.116058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:26:17.496 [2024-12-06 06:52:30.116074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.666 ms
00:26:17.496 [2024-12-06 06:52:30.116086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:17.496 [2024-12-06 06:52:30.118719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:17.496 [2024-12-06 06:52:30.118829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:26:17.496 [2024-12-06 06:52:30.118849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.599 ms
00:26:17.496 [2024-12-06 06:52:30.118860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:17.496 [2024-12-06 06:52:30.123521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:17.496 [2024-12-06 06:52:30.123617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:26:17.496 [2024-12-06 06:52:30.123691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.629 ms
00:26:17.496 [2024-12-06 06:52:30.123730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:17.496 [2024-12-06 06:52:30.130747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:17.496 [2024-12-06 06:52:30.130854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:26:17.496 [2024-12-06 06:52:30.130932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.956 ms
00:26:17.496 [2024-12-06 06:52:30.130971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:17.496 [2024-12-06 06:52:30.154425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:17.496 [2024-12-06 06:52:30.154554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:26:17.496 [2024-12-06 06:52:30.154627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.375 ms
00:26:17.496 [2024-12-06 06:52:30.154663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:17.496 [2024-12-06 06:52:30.168474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:17.496 [2024-12-06 06:52:30.168587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:26:17.496 [2024-12-06 06:52:30.168657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.749 ms
00:26:17.496 [2024-12-06 06:52:30.168695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:17.497 [2024-12-06 06:52:30.168877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:17.497 [2024-12-06 06:52:30.168990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:26:17.497 [2024-12-06 06:52:30.169034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms
00:26:17.497 [2024-12-06 06:52:30.169137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:17.497 [2024-12-06 06:52:30.192556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:17.497 [2024-12-06 06:52:30.192668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:26:17.497 [2024-12-06 06:52:30.192737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.368 ms
00:26:17.497 [2024-12-06 06:52:30.192773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:17.497 [2024-12-06 06:52:30.215710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:17.497 [2024-12-06 06:52:30.215816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:26:17.497 [2024-12-06 06:52:30.215885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.858 ms
00:26:17.497 [2024-12-06 06:52:30.215921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:17.755 [2024-12-06 06:52:30.238741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:17.755 [2024-12-06 06:52:30.238849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:26:17.755 [2024-12-06 06:52:30.238917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.758 ms
00:26:17.755 [2024-12-06 06:52:30.238951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:17.755 [2024-12-06 06:52:30.261993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:17.755 [2024-12-06 06:52:30.262107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:26:17.755 [2024-12-06 06:52:30.262178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.948 ms
00:26:17.755 [2024-12-06 06:52:30.262213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:17.755 [2024-12-06 06:52:30.262330] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:26:17.755 [2024-12-06 06:52:30.262376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:26:17.755 [2024-12-06 06:52:30.262428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:26:17.755 [2024-12-06 06:52:30.262503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:26:17.755 [2024-12-06 06:52:30.262559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:26:17.755 [2024-12-06 06:52:30.262673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:26:17.755 [2024-12-06 06:52:30.262725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:26:17.755 [2024-12-06 06:52:30.262776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:26:17.755 [2024-12-06 06:52:30.262877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:26:17.755 [2024-12-06 06:52:30.262929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:26:17.755 [2024-12-06 06:52:30.263021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:26:17.755 [2024-12-06 06:52:30.263077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:26:17.755 [2024-12-06 06:52:30.263165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:26:17.755 [2024-12-06 06:52:30.263223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:26:17.755 [2024-12-06 06:52:30.263321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:26:17.755 [2024-12-06 06:52:30.263377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:26:17.755 [2024-12-06 06:52:30.263493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:26:17.755 [2024-12-06 06:52:30.263546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:26:17.755 [2024-12-06 06:52:30.263703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:26:17.755 [2024-12-06 06:52:30.263757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:26:17.755 [2024-12-06 06:52:30.263808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:26:17.755 [2024-12-06 06:52:30.263861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:26:17.755 [2024-12-06 06:52:30.263916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:26:17.755 [2024-12-06 06:52:30.264060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:26:17.755 [2024-12-06 06:52:30.264447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:26:17.755 [2024-12-06 06:52:30.264551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:26:17.755 [2024-12-06 06:52:30.264605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:26:17.755 [2024-12-06 06:52:30.264698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:26:17.755 [2024-12-06 06:52:30.264750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:26:17.755 [2024-12-06 06:52:30.264801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:26:17.755 [2024-12-06 06:52:30.264856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:26:17.755 [2024-12-06 06:52:30.264909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:26:17.755 [2024-12-06 06:52:30.265262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:26:17.755 [2024-12-06 06:52:30.265276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:26:17.755 [2024-12-06 06:52:30.265290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:26:17.755 [2024-12-06 06:52:30.265303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:26:17.755 [2024-12-06 06:52:30.265316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:26:17.755 [2024-12-06 06:52:30.265329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.265341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.265354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.265367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.265380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.265392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.265405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.265418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.265431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.265443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.265455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.265494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.265508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.265521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.265534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.265547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.265560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.265573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.265586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.265599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.265613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.265625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.265638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.265651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.265664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.265678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.265692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.265705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.265718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.265730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.265743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.265756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.265769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.265782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.265795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.265808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.265822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.265835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.265847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.265861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.265874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.265886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.265898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.265910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.265923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.265937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.265949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.265963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.265975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.265987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.266000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.266013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.266025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.266038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.266051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.266063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.266076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.266097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.266111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.266123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.266137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.266156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.266168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.266186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:26:17.756 [2024-12-06 06:52:30.266209] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:26:17.756 [2024-12-06 06:52:30.266228] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5ee066fa-e7bc-4a33-b1a8-f35f9ed69a0f
00:26:17.756 [2024-12-06 06:52:30.266241] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:26:17.756 [2024-12-06 06:52:30.266253] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:26:17.756 [2024-12-06 06:52:30.266264] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:26:17.756 [2024-12-06 06:52:30.266276] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:26:17.756 [2024-12-06 06:52:30.266288] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:26:17.756 [2024-12-06 06:52:30.266300] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:26:17.756 [2024-12-06 06:52:30.266316] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:26:17.756 [2024-12-06 06:52:30.266327] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:26:17.756 [2024-12-06 06:52:30.266337] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:26:17.756 [2024-12-06 06:52:30.266349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:17.756 [2024-12-06 06:52:30.266361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:26:17.756 [2024-12-06 06:52:30.266376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.020 ms
00:26:17.756 [2024-12-06 06:52:30.266389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:17.756 [2024-12-06 06:52:30.279750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:17.756 [2024-12-06 06:52:30.279787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:26:17.756 [2024-12-06 06:52:30.279804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.315 ms
00:26:17.756 [2024-12-06 06:52:30.279816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:17.756 [2024-12-06 06:52:30.280259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:17.756 [2024-12-06 06:52:30.280289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:26:17.756 [2024-12-06 06:52:30.280302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.393 ms
00:26:17.756 [2024-12-06 06:52:30.280314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:17.756 [2024-12-06 06:52:30.315615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:17.756 [2024-12-06 06:52:30.315651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:26:17.756 [2024-12-06 06:52:30.315666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:17.756 [2024-12-06 06:52:30.315682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:17.756 [2024-12-06 06:52:30.315781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:17.756 [2024-12-06 06:52:30.315795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:26:17.756 [2024-12-06 06:52:30.315808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:17.756 [2024-12-06 06:52:30.315820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:17.756 [2024-12-06 06:52:30.315877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:17.756 [2024-12-06 06:52:30.315891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:26:17.757 [2024-12-06 06:52:30.315905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:17.757 [2024-12-06 06:52:30.315917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:17.757 [2024-12-06 06:52:30.315947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:17.757 [2024-12-06 06:52:30.315960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:26:17.757 [2024-12-06 06:52:30.315973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:17.757 [2024-12-06 06:52:30.315985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:17.757 [2024-12-06 06:52:30.391824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:17.757 [2024-12-06 06:52:30.391878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:26:17.757 [2024-12-06 06:52:30.391894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:17.757 [2024-12-06 06:52:30.391909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:17.757 [2024-12-06 06:52:30.453918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:17.757 [2024-12-06 06:52:30.453973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:26:17.757 [2024-12-06 06:52:30.453990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:17.757 [2024-12-06 06:52:30.454002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:17.757 [2024-12-06 06:52:30.454073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:17.757 [2024-12-06 06:52:30.454086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:26:17.757 [2024-12-06 06:52:30.454097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:17.757 [2024-12-06 06:52:30.454108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:17.757 [2024-12-06 06:52:30.454145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:17.757 [2024-12-06 06:52:30.454164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:26:17.757 [2024-12-06 06:52:30.454177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:17.757 [2024-12-06 06:52:30.454189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:17.757 [2024-12-06 06:52:30.454316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:17.757 [2024-12-06 06:52:30.454331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:26:17.757 [2024-12-06 06:52:30.454344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:17.757 [2024-12-06 06:52:30.454358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:17.757 [2024-12-06 06:52:30.454403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:17.757 [2024-12-06 06:52:30.454417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:26:17.757 [2024-12-06 06:52:30.454434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:17.757 [2024-12-06 06:52:30.454446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:17.757 [2024-12-06 06:52:30.454522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:17.757 [2024-12-06 06:52:30.454539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:26:17.757 [2024-12-06 06:52:30.454551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:17.757 [2024-12-06 06:52:30.454564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:17.757 [2024-12-06 06:52:30.454621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:17.757 [2024-12-06 06:52:30.454657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:26:17.757 [2024-12-06 06:52:30.454670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:17.757 [2024-12-06 06:52:30.454681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:17.757 [2024-12-06 06:52:30.454859] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 341.775 ms, result 0
00:26:18.690
00:26:18.690
00:26:18.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:18.690 06:52:31 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=77248
00:26:18.690 06:52:31 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 77248
00:26:18.690 06:52:31 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 77248 ']'
00:26:18.690 06:52:31 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:26:18.690 06:52:31 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:18.690 06:52:31 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:18.690 06:52:31 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:18.690 06:52:31 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:18.690 06:52:31 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:26:18.691 [2024-12-06 06:52:31.276243] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization...
00:26:18.691 [2024-12-06 06:52:31.276370] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77248 ]
00:26:18.948 [2024-12-06 06:52:31.439130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:18.948 [2024-12-06 06:52:31.539259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:26:19.513 06:52:32 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:19.513 06:52:32 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0
00:26:19.513 06:52:32 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config
00:26:19.771 [2024-12-06 06:52:32.326691] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:26:19.771 [2024-12-06 06:52:32.326751] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:26:19.771 [2024-12-06 06:52:32.501532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:19.771 [2024-12-06 06:52:32.501588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:26:19.771 [2024-12-06 06:52:32.501603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:26:19.771 [2024-12-06 06:52:32.501611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:19.771 [2024-12-06 06:52:32.504224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:19.771 [2024-12-06 06:52:32.504262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:26:19.771 [2024-12-06 06:52:32.504274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.594 ms
00:26:19.771 [2024-12-06 06:52:32.504281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:19.771 [2024-12-06 06:52:32.504353] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:26:19.771 [2024-12-06 06:52:32.505006] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:26:19.771 [2024-12-06 06:52:32.505030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:19.771 [2024-12-06 06:52:32.505038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:26:19.771 [2024-12-06 06:52:32.505048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.686 ms
00:26:19.771 [2024-12-06 06:52:32.505055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:19.771 [2024-12-06 06:52:32.506278] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:26:20.031 [2024-12-06 06:52:32.518742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:20.031 [2024-12-06 06:52:32.518781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:26:20.031 [2024-12-06 06:52:32.518794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.468 ms
00:26:20.031 [2024-12-06 06:52:32.518803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:20.031 [2024-12-06 06:52:32.518886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:20.031 [2024-12-06 06:52:32.518899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:26:20.031 [2024-12-06 06:52:32.518908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms
00:26:20.031 [2024-12-06 06:52:32.518917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:20.031 [2024-12-06 06:52:32.523985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:20.031 [2024-12-06 06:52:32.524023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:26:20.031 [2024-12-06 06:52:32.524032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.021 ms
00:26:20.031 [2024-12-06 06:52:32.524041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:20.031 [2024-12-06 06:52:32.524135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:20.031 [2024-12-06 06:52:32.524147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:26:20.031 [2024-12-06 06:52:32.524155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms
00:26:20.031 [2024-12-06 06:52:32.524167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:20.031 [2024-12-06 06:52:32.524190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:20.031 [2024-12-06 06:52:32.524201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:26:20.031 [2024-12-06 06:52:32.524208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:26:20.031 [2024-12-06 06:52:32.524217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:20.031 [2024-12-06 06:52:32.524239] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:26:20.031 [2024-12-06 06:52:32.527562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:20.031 [2024-12-06 06:52:32.527591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:26:20.031 [2024-12-06 06:52:32.527602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.326 ms
00:26:20.031 [2024-12-06 06:52:32.527610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:20.031 [2024-12-06 06:52:32.527648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:20.031 [2024-12-06 06:52:32.527656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:26:20.031 [2024-12-06 06:52:32.527665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms
00:26:20.031 [2024-12-06 06:52:32.527674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:20.031 [2024-12-06 06:52:32.527695] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:26:20.031 [2024-12-06 06:52:32.527713] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:26:20.031 [2024-12-06 06:52:32.527755] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:26:20.031 [2024-12-06 06:52:32.527770] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:26:20.031 [2024-12-06 06:52:32.527874] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:26:20.031 [2024-12-06 06:52:32.527884] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:26:20.031 [2024-12-06 06:52:32.527898] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:26:20.031 [2024-12-06 06:52:32.527907] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:26:20.031 [2024-12-06 06:52:32.527917] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:26:20.031 [2024-12-06 06:52:32.527925] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
00:26:20.031 [2024-12-06 06:52:32.527934] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:26:20.031 [2024-12-06 06:52:32.527940] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:26:20.031 [2024-12-06 06:52:32.527950] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:26:20.031 [2024-12-06 06:52:32.527957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:20.031 [2024-12-06 06:52:32.527966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:26:20.031 [2024-12-06 06:52:32.527973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms
00:26:20.031 [2024-12-06 06:52:32.527981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:20.031 [2024-12-06 06:52:32.528069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:20.031 [2024-12-06 06:52:32.528079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:26:20.031 [2024-12-06 06:52:32.528086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms
00:26:20.031 [2024-12-06 06:52:32.528094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:20.031 [2024-12-06 06:52:32.528205] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:26:20.031 [2024-12-06 06:52:32.528217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:26:20.031 [2024-12-06 06:52:32.528225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:26:20.031 [2024-12-06 06:52:32.528234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:26:20.031 [2024-12-06 06:52:32.528242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:26:20.031 [2024-12-06 06:52:32.528251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:26:20.031 [2024-12-06 06:52:32.528258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB
00:26:20.031 [2024-12-06 06:52:32.528267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:26:20.032 [2024-12-06 06:52:32.528274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB
00:26:20.032 [2024-12-06 06:52:32.528282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:26:20.032 [2024-12-06 06:52:32.528289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:26:20.032 [2024-12-06 06:52:32.528297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB
00:26:20.032 [2024-12-06 06:52:32.528305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:26:20.032 [2024-12-06 06:52:32.528313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:26:20.032 [2024-12-06 06:52:32.528321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB
00:26:20.032 [2024-12-06 06:52:32.528329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:26:20.032 [2024-12-06 06:52:32.528336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:26:20.032 [2024-12-06 06:52:32.528344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB
00:26:20.032 [2024-12-06 06:52:32.528355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:26:20.032 [2024-12-06 06:52:32.528363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:26:20.032 [2024-12-06 06:52:32.528370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB
00:26:20.032 [2024-12-06 06:52:32.528378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:26:20.032 [2024-12-06 06:52:32.528384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:26:20.032 [2024-12-06 06:52:32.528394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB
00:26:20.032 [2024-12-06 06:52:32.528400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:26:20.032 [2024-12-06 06:52:32.528408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:26:20.032 [2024-12-06 06:52:32.528415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB
00:26:20.032 [2024-12-06 06:52:32.528422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:26:20.032 [2024-12-06 06:52:32.528430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:26:20.032 [2024-12-06 06:52:32.528439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB
00:26:20.032 [2024-12-06 06:52:32.528446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:26:20.032 [2024-12-06 06:52:32.528454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:26:20.032 [2024-12-06 06:52:32.528471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB
00:26:20.032 [2024-12-06 06:52:32.528481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:26:20.032 [2024-12-06 06:52:32.528488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:26:20.032 [2024-12-06 06:52:32.528496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB
00:26:20.032 [2024-12-06 06:52:32.528502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:26:20.032 [2024-12-06 06:52:32.528510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:26:20.032 [2024-12-06 06:52:32.528516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB
00:26:20.032 [2024-12-06 06:52:32.528525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:26:20.032 [2024-12-06 06:52:32.528532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:26:20.032 [2024-12-06 06:52:32.528540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB
00:26:20.032 [2024-12-06 06:52:32.528546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:26:20.032 [2024-12-06 06:52:32.528554] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:26:20.032 [2024-12-06 06:52:32.528563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:26:20.032 [2024-12-06 06:52:32.528571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:26:20.032 [2024-12-06 06:52:32.528579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:26:20.032 [2024-12-06 06:52:32.528588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:26:20.032 [2024-12-06 06:52:32.528594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:26:20.032 [2024-12-06 06:52:32.528602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:26:20.032 [2024-12-06 06:52:32.528609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:26:20.032 [2024-12-06 06:52:32.528617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:26:20.032 [2024-12-06 06:52:32.528624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:26:20.032 [2024-12-06 06:52:32.528633] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:26:20.032 [2024-12-06 06:52:32.528642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:26:20.032 [2024-12-06 06:52:32.528655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:26:20.032 [2024-12-06 06:52:32.528662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:26:20.032 [2024-12-06 06:52:32.528671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:26:20.032 [2024-12-06 06:52:32.528678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:26:20.032 [2024-12-06 06:52:32.528687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:26:20.032 [2024-12-06 06:52:32.528693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:26:20.032 [2024-12-06 06:52:32.528702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:26:20.032 [2024-12-06 06:52:32.528709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:26:20.032 [2024-12-06 06:52:32.528718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:26:20.032 [2024-12-06 06:52:32.528725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:26:20.032 [2024-12-06 06:52:32.528734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:26:20.032 [2024-12-06 06:52:32.528741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:26:20.032 [2024-12-06 06:52:32.528749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:26:20.032 [2024-12-06 06:52:32.528756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:26:20.032 [2024-12-06 06:52:32.528764] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:26:20.032 [2024-12-06
06:52:32.528772] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:20.032 [2024-12-06 06:52:32.528783] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:20.032 [2024-12-06 06:52:32.528790] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:20.032 [2024-12-06 06:52:32.528798] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:20.032 [2024-12-06 06:52:32.528805] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:20.032 [2024-12-06 06:52:32.528814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.032 [2024-12-06 06:52:32.528821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:20.032 [2024-12-06 06:52:32.528830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.676 ms 00:26:20.032 [2024-12-06 06:52:32.528839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.032 [2024-12-06 06:52:32.554874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.032 [2024-12-06 06:52:32.554909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:20.032 [2024-12-06 06:52:32.554921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.962 ms 00:26:20.032 [2024-12-06 06:52:32.554930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.032 [2024-12-06 06:52:32.555046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.032 [2024-12-06 06:52:32.555055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:20.032 [2024-12-06 06:52:32.555065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:26:20.032 [2024-12-06 06:52:32.555072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.032 [2024-12-06 06:52:32.585254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.032 [2024-12-06 06:52:32.585291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:20.032 [2024-12-06 06:52:32.585303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.159 ms 00:26:20.032 [2024-12-06 06:52:32.585310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.032 [2024-12-06 06:52:32.585369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.032 [2024-12-06 06:52:32.585378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:20.032 [2024-12-06 06:52:32.585388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:20.032 [2024-12-06 06:52:32.585395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.032 [2024-12-06 06:52:32.585727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.032 [2024-12-06 06:52:32.585745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:20.032 [2024-12-06 06:52:32.585758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:26:20.032 [2024-12-06 06:52:32.585766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:26:20.032 [2024-12-06 06:52:32.585889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.032 [2024-12-06 06:52:32.585897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:20.032 [2024-12-06 06:52:32.585907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:26:20.032 [2024-12-06 06:52:32.585914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.032 [2024-12-06 06:52:32.600063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.032 [2024-12-06 06:52:32.600093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:20.032 [2024-12-06 06:52:32.600104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.127 ms 00:26:20.032 [2024-12-06 06:52:32.600112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.032 [2024-12-06 06:52:32.631661] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:20.033 [2024-12-06 06:52:32.631703] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:20.033 [2024-12-06 06:52:32.631720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.033 [2024-12-06 06:52:32.631730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:20.033 [2024-12-06 06:52:32.631743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.498 ms 00:26:20.033 [2024-12-06 06:52:32.631757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.033 [2024-12-06 06:52:32.655698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.033 [2024-12-06 06:52:32.655734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:20.033 [2024-12-06 06:52:32.655745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.860 ms 00:26:20.033 [2024-12-06 06:52:32.655754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.033 [2024-12-06 06:52:32.667336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.033 [2024-12-06 06:52:32.667368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:20.033 [2024-12-06 06:52:32.667381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.512 ms 00:26:20.033 [2024-12-06 06:52:32.667388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.033 [2024-12-06 06:52:32.678939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.033 [2024-12-06 06:52:32.678972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:20.033 [2024-12-06 06:52:32.678984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.475 ms 00:26:20.033 [2024-12-06 06:52:32.678991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.033 [2024-12-06 06:52:32.679628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.033 [2024-12-06 06:52:32.679648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:20.033 [2024-12-06 06:52:32.679659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:26:20.033 [2024-12-06 06:52:32.679666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.033 [2024-12-06 
06:52:32.734765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.033 [2024-12-06 06:52:32.734817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:20.033 [2024-12-06 06:52:32.734832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.074 ms 00:26:20.033 [2024-12-06 06:52:32.734840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.033 [2024-12-06 06:52:32.745296] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:26:20.033 [2024-12-06 06:52:32.759486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.033 [2024-12-06 06:52:32.759532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:20.033 [2024-12-06 06:52:32.759547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.544 ms 00:26:20.033 [2024-12-06 06:52:32.759558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.033 [2024-12-06 06:52:32.759642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.033 [2024-12-06 06:52:32.759654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:20.033 [2024-12-06 06:52:32.759662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:20.033 [2024-12-06 06:52:32.759671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.033 [2024-12-06 06:52:32.759718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.033 [2024-12-06 06:52:32.759728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:20.033 [2024-12-06 06:52:32.759736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:26:20.033 [2024-12-06 06:52:32.759747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.033 [2024-12-06 06:52:32.759769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.033 [2024-12-06 06:52:32.759779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:20.033 [2024-12-06 06:52:32.759786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:20.033 [2024-12-06 06:52:32.759797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.033 [2024-12-06 06:52:32.759828] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:20.033 [2024-12-06 06:52:32.759841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.033 [2024-12-06 06:52:32.759850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:20.033 [2024-12-06 06:52:32.759859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:20.033 [2024-12-06 06:52:32.759867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.291 [2024-12-06 06:52:32.783699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.291 [2024-12-06 06:52:32.783739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:20.291 [2024-12-06 06:52:32.783752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.807 ms 00:26:20.291 [2024-12-06 06:52:32.783760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.291 [2024-12-06 06:52:32.783848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.291 [2024-12-06 06:52:32.783858] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:20.291 [2024-12-06 06:52:32.783867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:26:20.291 [2024-12-06 06:52:32.783877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.291 [2024-12-06 06:52:32.784718] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:20.291 [2024-12-06 06:52:32.787636] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 282.901 ms, result 0 00:26:20.291 [2024-12-06 06:52:32.789897] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:20.291 Some configs were skipped because the RPC state that can call them passed over. 00:26:20.291 06:52:32 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:26:20.291 [2024-12-06 06:52:33.017200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.291 [2024-12-06 06:52:33.017251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:26:20.291 [2024-12-06 06:52:33.017265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.665 ms 00:26:20.292 [2024-12-06 06:52:33.017276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.292 [2024-12-06 06:52:33.017308] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.775 ms, result 0 00:26:20.292 true 00:26:20.550 06:52:33 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:26:20.550 [2024-12-06 06:52:33.241956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.550 [2024-12-06 06:52:33.242002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:26:20.550 [2024-12-06 06:52:33.242016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.150 ms 00:26:20.550 [2024-12-06 06:52:33.242024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.550 [2024-12-06 06:52:33.242061] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.257 ms, result 0 00:26:20.550 true 00:26:20.550 06:52:33 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 77248 00:26:20.550 06:52:33 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 77248 ']' 00:26:20.550 06:52:33 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 77248 00:26:20.550 06:52:33 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:26:20.550 06:52:33 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:20.550 06:52:33 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77248 00:26:20.550 killing process with pid 77248 00:26:20.550 06:52:33 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:20.550 06:52:33 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:20.550 06:52:33 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77248' 00:26:20.550 06:52:33 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 77248 00:26:20.550 06:52:33 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 77248 00:26:21.486 [2024-12-06 06:52:33.984805] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.486 [2024-12-06 06:52:33.984868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:21.486 [2024-12-06 06:52:33.984881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:21.486 [2024-12-06 06:52:33.984891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.486 [2024-12-06 06:52:33.984915] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:26:21.486 [2024-12-06 06:52:33.987573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.486 [2024-12-06 06:52:33.987607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:21.486 [2024-12-06 06:52:33.987620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.641 ms 00:26:21.486 [2024-12-06 06:52:33.987629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.486 [2024-12-06 06:52:33.987918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.486 [2024-12-06 06:52:33.987928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:21.486 [2024-12-06 06:52:33.987938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:26:21.486 [2024-12-06 06:52:33.987947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.486 [2024-12-06 06:52:33.992736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.486 [2024-12-06 06:52:33.992766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:21.486 [2024-12-06 06:52:33.992778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.767 ms 00:26:21.486 [2024-12-06 06:52:33.992785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.486 [2024-12-06 06:52:33.999660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.486 [2024-12-06 06:52:33.999689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:21.486 [2024-12-06 06:52:33.999703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.840 ms 00:26:21.486 [2024-12-06 06:52:33.999716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.486 [2024-12-06 06:52:34.010129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.486 [2024-12-06 06:52:34.010170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:21.486 [2024-12-06 06:52:34.010184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.361 ms 00:26:21.486 [2024-12-06 06:52:34.010191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.486 [2024-12-06 06:52:34.017040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.486 [2024-12-06 06:52:34.017075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:21.486 [2024-12-06 06:52:34.017087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.812 ms 00:26:21.487 [2024-12-06 06:52:34.017096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.487 [2024-12-06 06:52:34.017223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.487 [2024-12-06 06:52:34.017233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:21.487 [2024-12-06 06:52:34.017244] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:26:21.487 [2024-12-06 06:52:34.017252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.487 [2024-12-06 06:52:34.027841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.487 [2024-12-06 06:52:34.027884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:21.487 [2024-12-06 06:52:34.027901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.567 ms 00:26:21.487 [2024-12-06 06:52:34.027911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.487 [2024-12-06 06:52:34.038178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.487 [2024-12-06 06:52:34.038210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:21.487 [2024-12-06 06:52:34.038226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.225 ms 00:26:21.487 [2024-12-06 06:52:34.038234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.487 [2024-12-06 06:52:34.047993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.487 [2024-12-06 06:52:34.048024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:21.487 [2024-12-06 06:52:34.048036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.722 ms 00:26:21.487 [2024-12-06 06:52:34.048044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.487 [2024-12-06 06:52:34.057952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.487 [2024-12-06 06:52:34.057983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:21.487 [2024-12-06 06:52:34.057994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.844 ms 00:26:21.487 [2024-12-06 06:52:34.058002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.487 [2024-12-06 06:52:34.058036] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:21.487 [2024-12-06 06:52:34.058050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058135] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 
[2024-12-06 06:52:34.058341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:26:21.487 [2024-12-06 06:52:34.058555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:21.487 [2024-12-06 06:52:34.058639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:21.488 [2024-12-06 06:52:34.058648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:21.488 [2024-12-06 06:52:34.058655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:21.488 [2024-12-06 06:52:34.058664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:21.488 [2024-12-06 06:52:34.058671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:21.488 [2024-12-06 06:52:34.058679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:21.488 [2024-12-06 06:52:34.058686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:21.488 [2024-12-06 06:52:34.058695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:21.488 [2024-12-06 06:52:34.058703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:21.488 [2024-12-06 06:52:34.058711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:21.488 [2024-12-06 06:52:34.058718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:21.488 [2024-12-06 06:52:34.058727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:21.488 [2024-12-06 06:52:34.058734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:21.488 [2024-12-06 06:52:34.058743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:21.488 [2024-12-06 06:52:34.058750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:26:21.488 [2024-12-06 06:52:34.058760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:21.488 [2024-12-06 06:52:34.058767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:21.488 [2024-12-06 06:52:34.058776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:21.488 [2024-12-06 06:52:34.058783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:21.488 [2024-12-06 06:52:34.058791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:21.488 [2024-12-06 06:52:34.058798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:21.488 [2024-12-06 06:52:34.058807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:21.488 [2024-12-06 06:52:34.058815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:21.488 [2024-12-06 06:52:34.058826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:21.488 [2024-12-06 06:52:34.058834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:21.488 [2024-12-06 06:52:34.058842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:21.488 [2024-12-06 06:52:34.058850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:21.488 [2024-12-06 06:52:34.058858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:21.488 [2024-12-06 06:52:34.058865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:21.488 [2024-12-06 06:52:34.058874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:21.488 [2024-12-06 06:52:34.058895] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:21.488 [2024-12-06 06:52:34.058908] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5ee066fa-e7bc-4a33-b1a8-f35f9ed69a0f 00:26:21.488 [2024-12-06 06:52:34.058919] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:21.488 [2024-12-06 06:52:34.058927] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:21.488 [2024-12-06 06:52:34.058934] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:21.488 [2024-12-06 06:52:34.058943] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:21.488 [2024-12-06 06:52:34.058951] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:21.488 [2024-12-06 06:52:34.058959] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:21.488 [2024-12-06 06:52:34.058966] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:21.488 [2024-12-06 06:52:34.058974] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:21.488 [2024-12-06 06:52:34.058980] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:21.488 [2024-12-06 06:52:34.058988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:21.488 [2024-12-06 06:52:34.058995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:21.488 [2024-12-06 06:52:34.059005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.954 ms 00:26:21.488 [2024-12-06 06:52:34.059012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.488 [2024-12-06 06:52:34.071416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.488 [2024-12-06 06:52:34.071448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:21.488 [2024-12-06 06:52:34.071483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.370 ms 00:26:21.488 [2024-12-06 06:52:34.071491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.488 [2024-12-06 06:52:34.071851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.488 [2024-12-06 06:52:34.071861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:21.488 [2024-12-06 06:52:34.071872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:26:21.488 [2024-12-06 06:52:34.071879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.488 [2024-12-06 06:52:34.116004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:21.488 [2024-12-06 06:52:34.116049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:21.488 [2024-12-06 06:52:34.116061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:21.488 [2024-12-06 06:52:34.116070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.488 [2024-12-06 06:52:34.117300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:21.488 [2024-12-06 06:52:34.117328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:21.488 [2024-12-06 06:52:34.117342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:21.488 [2024-12-06 06:52:34.117351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.488 [2024-12-06 06:52:34.117398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:21.488 [2024-12-06 06:52:34.117408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:21.488 [2024-12-06 06:52:34.117420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:21.488 [2024-12-06 06:52:34.117428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.488 [2024-12-06 06:52:34.117448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:21.488 [2024-12-06 06:52:34.117457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:21.488 [2024-12-06 06:52:34.117483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:21.488 [2024-12-06 06:52:34.117493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.488 [2024-12-06 06:52:34.195139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:21.488 [2024-12-06 06:52:34.195194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:21.488 [2024-12-06 06:52:34.195209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:21.488 [2024-12-06 06:52:34.195217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.746 [2024-12-06 
06:52:34.259249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:21.746 [2024-12-06 06:52:34.259304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:21.746 [2024-12-06 06:52:34.259318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:21.746 [2024-12-06 06:52:34.259329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.746 [2024-12-06 06:52:34.259425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:21.746 [2024-12-06 06:52:34.259436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:21.746 [2024-12-06 06:52:34.259448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:21.746 [2024-12-06 06:52:34.259455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.746 [2024-12-06 06:52:34.259501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:21.746 [2024-12-06 06:52:34.259509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:21.746 [2024-12-06 06:52:34.259519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:21.746 [2024-12-06 06:52:34.259526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.746 [2024-12-06 06:52:34.259619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:21.746 [2024-12-06 06:52:34.259628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:21.746 [2024-12-06 06:52:34.259637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:21.746 [2024-12-06 06:52:34.259644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.746 [2024-12-06 06:52:34.259677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:21.746 [2024-12-06 06:52:34.259685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:21.746 [2024-12-06 06:52:34.259694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:21.746 [2024-12-06 06:52:34.259701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.746 [2024-12-06 06:52:34.259739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:21.746 [2024-12-06 06:52:34.259747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:21.746 [2024-12-06 06:52:34.259758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:21.746 [2024-12-06 06:52:34.259766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.746 [2024-12-06 06:52:34.259807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:21.747 [2024-12-06 06:52:34.259816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:21.747 [2024-12-06 06:52:34.259826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:21.747 [2024-12-06 06:52:34.259833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.747 [2024-12-06 06:52:34.259959] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 275.135 ms, result 0 00:26:22.311 06:52:34 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:22.311 [2024-12-06 06:52:35.010859] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:26:22.311 [2024-12-06 06:52:35.010989] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77302 ] 00:26:22.568 [2024-12-06 06:52:35.171325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:22.568 [2024-12-06 06:52:35.272644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.825 [2024-12-06 06:52:35.531227] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:22.825 [2024-12-06 06:52:35.531296] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:23.083 [2024-12-06 06:52:35.685642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.083 [2024-12-06 06:52:35.685685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:23.083 [2024-12-06 06:52:35.685698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:23.083 [2024-12-06 06:52:35.685706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.083 [2024-12-06 06:52:35.688316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.083 [2024-12-06 06:52:35.688350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:23.083 [2024-12-06 06:52:35.688361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.595 ms 00:26:23.083 [2024-12-06 06:52:35.688369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.083 [2024-12-06 06:52:35.688438] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:23.083 [2024-12-06 06:52:35.689111] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:23.083 [2024-12-06 06:52:35.689134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.083 [2024-12-06 06:52:35.689142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:23.083 [2024-12-06 06:52:35.689151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.704 ms 00:26:23.083 [2024-12-06 06:52:35.689158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.083 [2024-12-06 06:52:35.690244] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:23.083 [2024-12-06 06:52:35.702579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.083 [2024-12-06 06:52:35.702614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:23.083 [2024-12-06 06:52:35.702625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.336 ms 00:26:23.083 [2024-12-06 06:52:35.702632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.083 [2024-12-06 06:52:35.702716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.083 [2024-12-06 06:52:35.702727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:23.083 [2024-12-06 06:52:35.702736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:26:23.083 [2024-12-06 
06:52:35.702743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.083 [2024-12-06 06:52:35.707507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.083 [2024-12-06 06:52:35.707644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:23.083 [2024-12-06 06:52:35.707660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.725 ms 00:26:23.083 [2024-12-06 06:52:35.707668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.083 [2024-12-06 06:52:35.707754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.083 [2024-12-06 06:52:35.707763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:23.083 [2024-12-06 06:52:35.707771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:26:23.083 [2024-12-06 06:52:35.707782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.083 [2024-12-06 06:52:35.707808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.083 [2024-12-06 06:52:35.707816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:23.083 [2024-12-06 06:52:35.707824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:23.083 [2024-12-06 06:52:35.707831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.083 [2024-12-06 06:52:35.707851] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:26:23.083 [2024-12-06 06:52:35.711086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.083 [2024-12-06 06:52:35.711187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:23.083 [2024-12-06 06:52:35.711201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.240 ms 00:26:23.083 [2024-12-06 06:52:35.711208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.083 [2024-12-06 06:52:35.711245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.083 [2024-12-06 06:52:35.711254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:23.084 [2024-12-06 06:52:35.711262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:23.084 [2024-12-06 06:52:35.711269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.084 [2024-12-06 06:52:35.711291] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:23.084 [2024-12-06 06:52:35.711309] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:23.084 [2024-12-06 06:52:35.711343] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:23.084 [2024-12-06 06:52:35.711358] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:23.084 [2024-12-06 06:52:35.711483] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:23.084 [2024-12-06 06:52:35.711494] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:23.084 [2024-12-06 06:52:35.711505] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:26:23.084 [2024-12-06 06:52:35.711517] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:23.084 [2024-12-06 06:52:35.711526] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:23.084 [2024-12-06 06:52:35.711534] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:26:23.084 [2024-12-06 06:52:35.711542] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:23.084 [2024-12-06 06:52:35.711549] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:23.084 [2024-12-06 06:52:35.711555] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:23.084 [2024-12-06 06:52:35.711563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.084 [2024-12-06 06:52:35.711570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:23.084 [2024-12-06 06:52:35.711578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:26:23.084 [2024-12-06 06:52:35.711585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.084 [2024-12-06 06:52:35.711672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.084 [2024-12-06 06:52:35.711683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:23.084 [2024-12-06 06:52:35.711690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:26:23.084 [2024-12-06 06:52:35.711697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.084 [2024-12-06 06:52:35.711810] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:23.084 [2024-12-06 06:52:35.711821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:23.084 [2024-12-06 06:52:35.711829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:23.084 [2024-12-06 06:52:35.711837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:23.084 [2024-12-06 06:52:35.711844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:23.084 [2024-12-06 06:52:35.711851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:23.084 [2024-12-06 06:52:35.711857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:26:23.084 [2024-12-06 06:52:35.711865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:23.084 [2024-12-06 06:52:35.711872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:26:23.084 [2024-12-06 06:52:35.711879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:23.084 [2024-12-06 06:52:35.711885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:23.084 [2024-12-06 06:52:35.711897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:26:23.084 [2024-12-06 06:52:35.711904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:23.084 [2024-12-06 06:52:35.711911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:23.084 [2024-12-06 06:52:35.711918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:26:23.084 [2024-12-06 06:52:35.711924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:23.084 [2024-12-06 06:52:35.711931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:26:23.084 [2024-12-06 06:52:35.711938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:26:23.084 [2024-12-06 06:52:35.711945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:23.084 [2024-12-06 06:52:35.711951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:23.084 [2024-12-06 06:52:35.711958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:26:23.084 [2024-12-06 06:52:35.711964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:23.084 [2024-12-06 06:52:35.711971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:23.084 [2024-12-06 06:52:35.711977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:26:23.084 [2024-12-06 06:52:35.711983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:23.084 [2024-12-06 06:52:35.711990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:23.084 [2024-12-06 06:52:35.711997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:26:23.084 [2024-12-06 06:52:35.712003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:23.084 [2024-12-06 06:52:35.712009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:23.084 [2024-12-06 06:52:35.712016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:26:23.084 [2024-12-06 06:52:35.712022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:23.084 [2024-12-06 06:52:35.712029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:23.084 [2024-12-06 06:52:35.712035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:26:23.084 [2024-12-06 06:52:35.712041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:23.084 [2024-12-06 06:52:35.712048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:23.084 [2024-12-06 06:52:35.712054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:26:23.084 [2024-12-06 06:52:35.712061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:23.084 [2024-12-06 06:52:35.712068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:23.084 [2024-12-06 06:52:35.712075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:26:23.084 [2024-12-06 06:52:35.712081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:23.084 [2024-12-06 06:52:35.712088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:23.084 [2024-12-06 06:52:35.712094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:26:23.084 [2024-12-06 06:52:35.712100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:23.084 [2024-12-06 06:52:35.712107] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:23.084 [2024-12-06 06:52:35.712114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:23.084 [2024-12-06 06:52:35.712123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:23.084 [2024-12-06 06:52:35.712130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:23.084 [2024-12-06 06:52:35.712137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:23.084 [2024-12-06 06:52:35.712145] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:23.084 [2024-12-06 06:52:35.712152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:23.084 [2024-12-06 06:52:35.712159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:23.084 [2024-12-06 06:52:35.712165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:23.084 [2024-12-06 06:52:35.712172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:23.084 [2024-12-06 06:52:35.712179] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:23.084 [2024-12-06 06:52:35.712188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:23.084 [2024-12-06 06:52:35.712196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:26:23.084 [2024-12-06 06:52:35.712203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:26:23.084 [2024-12-06 06:52:35.712210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:26:23.084 [2024-12-06 06:52:35.712217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:26:23.084 [2024-12-06 06:52:35.712224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:26:23.084 [2024-12-06 06:52:35.712231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:26:23.084 [2024-12-06 06:52:35.712238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:26:23.084 [2024-12-06 06:52:35.712244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:26:23.084 [2024-12-06 06:52:35.712251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:26:23.084 [2024-12-06 06:52:35.712258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:26:23.084 [2024-12-06 06:52:35.712264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:26:23.084 [2024-12-06 06:52:35.712271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:26:23.084 [2024-12-06 06:52:35.712278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:26:23.084 [2024-12-06 06:52:35.712285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:26:23.084 [2024-12-06 06:52:35.712292] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:23.084 [2024-12-06 06:52:35.712300] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:23.084 [2024-12-06 06:52:35.712308] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:23.084 [2024-12-06 06:52:35.712315] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:23.084 [2024-12-06 06:52:35.712322] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:23.085 [2024-12-06 06:52:35.712329] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:23.085 [2024-12-06 06:52:35.712336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.085 [2024-12-06 06:52:35.712345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:23.085 [2024-12-06 06:52:35.712352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.594 ms 00:26:23.085 [2024-12-06 06:52:35.712359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.085 [2024-12-06 06:52:35.737940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.085 [2024-12-06 06:52:35.738050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:23.085 [2024-12-06 06:52:35.738099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.528 ms 00:26:23.085 [2024-12-06 06:52:35.738121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.085 [2024-12-06 06:52:35.738273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.085 [2024-12-06 06:52:35.738327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:23.085 [2024-12-06 06:52:35.738368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:26:23.085 [2024-12-06 06:52:35.738390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.085 [2024-12-06 06:52:35.787550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.085 [2024-12-06 06:52:35.787688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:23.085 [2024-12-06 06:52:35.787756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.125 ms 00:26:23.085 [2024-12-06 06:52:35.787780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.085 [2024-12-06 06:52:35.787880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.085 [2024-12-06 06:52:35.787908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:23.085 [2024-12-06 06:52:35.787928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:23.085 [2024-12-06 06:52:35.787947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.085 [2024-12-06 06:52:35.788269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.085 [2024-12-06 06:52:35.788305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:23.085 [2024-12-06 06:52:35.788332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:26:23.085 [2024-12-06 06:52:35.788401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.085 [2024-12-06 06:52:35.788559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:26:23.085 [2024-12-06 06:52:35.789139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:23.085 [2024-12-06 06:52:35.789187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:26:23.085 [2024-12-06 06:52:35.789244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.085 [2024-12-06 06:52:35.802644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.085 [2024-12-06 06:52:35.802748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:23.085 [2024-12-06 06:52:35.802796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.350 ms 00:26:23.085 [2024-12-06 06:52:35.802818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.085 [2024-12-06 06:52:35.815704] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:23.085 [2024-12-06 06:52:35.815820] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:23.085 [2024-12-06 06:52:35.815877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.085 [2024-12-06 06:52:35.815898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:23.085 [2024-12-06 06:52:35.815917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.950 ms 00:26:23.085 [2024-12-06 06:52:35.815935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.343 [2024-12-06 06:52:35.840075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.343 [2024-12-06 06:52:35.840172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:23.343 [2024-12-06 06:52:35.840219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.067 ms 00:26:23.343 [2024-12-06 06:52:35.840240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.343 [2024-12-06 06:52:35.852039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.343 [2024-12-06 06:52:35.852137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:23.343 [2024-12-06 06:52:35.852182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.726 ms 00:26:23.343 [2024-12-06 06:52:35.852203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.343 [2024-12-06 06:52:35.864259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.343 [2024-12-06 06:52:35.864385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:23.343 [2024-12-06 06:52:35.864438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.684 ms 00:26:23.343 [2024-12-06 06:52:35.864460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.343 [2024-12-06 06:52:35.865712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.343 [2024-12-06 06:52:35.865828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:23.343 [2024-12-06 06:52:35.865884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.548 ms 00:26:23.343 [2024-12-06 06:52:35.865907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.343 [2024-12-06 06:52:35.920589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.343 [2024-12-06 
06:52:35.920758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:23.343 [2024-12-06 06:52:35.920813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.643 ms 00:26:23.343 [2024-12-06 06:52:35.920836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.343 [2024-12-06 06:52:35.931039] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:26:23.343 [2024-12-06 06:52:35.944643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.343 [2024-12-06 06:52:35.944768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:23.343 [2024-12-06 06:52:35.944817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.704 ms 00:26:23.343 [2024-12-06 06:52:35.944844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.343 [2024-12-06 06:52:35.944940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.343 [2024-12-06 06:52:35.944967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:23.343 [2024-12-06 06:52:35.944987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:23.343 [2024-12-06 06:52:35.945006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.343 [2024-12-06 06:52:35.945067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.343 [2024-12-06 06:52:35.945089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:23.343 [2024-12-06 06:52:35.945109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:26:23.343 [2024-12-06 06:52:35.945178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.343 [2024-12-06 06:52:35.945233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.343 [2024-12-06 06:52:35.945256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:23.343 [2024-12-06 06:52:35.945276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:23.343 [2024-12-06 06:52:35.945294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.343 [2024-12-06 06:52:35.945339] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:23.343 [2024-12-06 06:52:35.945362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.343 [2024-12-06 06:52:35.945490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:23.343 [2024-12-06 06:52:35.945511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:26:23.343 [2024-12-06 06:52:35.945531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.343 [2024-12-06 06:52:35.969004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.343 [2024-12-06 06:52:35.969117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:23.343 [2024-12-06 06:52:35.969167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.437 ms 00:26:23.343 [2024-12-06 06:52:35.969189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.343 [2024-12-06 06:52:35.969279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.343 [2024-12-06 06:52:35.969305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:23.343 [2024-12-06 
06:52:35.969324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:26:23.343 [2024-12-06 06:52:35.969343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.343 [2024-12-06 06:52:35.970094] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:23.343 [2024-12-06 06:52:35.973138] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 284.183 ms, result 0 00:26:23.343 [2024-12-06 06:52:35.975063] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:23.343 [2024-12-06 06:52:35.987982] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:24.714  [2024-12-06T06:52:38.389Z] Copying: 17/256 [MB] (17 MBps) [2024-12-06T06:52:39.325Z] Copying: 32/256 [MB] (15 MBps) [2024-12-06T06:52:40.258Z] Copying: 52/256 [MB] (20 MBps) [2024-12-06T06:52:41.193Z] Copying: 70/256 [MB] (17 MBps) [2024-12-06T06:52:42.127Z] Copying: 85/256 [MB] (15 MBps) [2024-12-06T06:52:43.058Z] Copying: 104/256 [MB] (19 MBps) [2024-12-06T06:52:44.434Z] Copying: 117/256 [MB] (12 MBps) [2024-12-06T06:52:45.374Z] Copying: 129/256 [MB] (12 MBps) [2024-12-06T06:52:46.325Z] Copying: 140/256 [MB] (11 MBps) [2024-12-06T06:52:47.265Z] Copying: 156/256 [MB] (16 MBps) [2024-12-06T06:52:48.200Z] Copying: 169/256 [MB] (13 MBps) [2024-12-06T06:52:49.139Z] Copying: 184/256 [MB] (14 MBps) [2024-12-06T06:52:50.072Z] Copying: 199/256 [MB] (15 MBps) [2024-12-06T06:52:51.443Z] Copying: 214/256 [MB] (14 MBps) [2024-12-06T06:52:52.395Z] Copying: 229/256 [MB] (14 MBps) [2024-12-06T06:52:53.337Z] Copying: 243/256 [MB] (14 MBps) [2024-12-06T06:52:53.337Z] Copying: 255/256 [MB] (12 MBps) [2024-12-06T06:52:53.910Z] Copying: 256/256 [MB] (average 15 MBps)[2024-12-06 06:52:53.688267] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:41.169 [2024-12-06 06:52:53.699330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.169 [2024-12-06 06:52:53.699403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:41.169 [2024-12-06 06:52:53.699430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:41.169 [2024-12-06 06:52:53.699440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.169 [2024-12-06 06:52:53.699496] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:26:41.169 [2024-12-06 06:52:53.702482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.169 [2024-12-06 06:52:53.702529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:41.169 [2024-12-06 06:52:53.702541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.969 ms 00:26:41.169 [2024-12-06 06:52:53.702551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.169 [2024-12-06 06:52:53.702852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.169 [2024-12-06 06:52:53.702863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:41.169 [2024-12-06 06:52:53.702872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:26:41.169 [2024-12-06 06:52:53.702882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
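The progress entries above show 256 MiB written through the FTL bdev in about 18 s of wall clock (06:52:35 to 06:52:53), consistent with the reported 15 MBps average (256 / 15 is roughly 17 s). A hypothetical post-hoc helper for pulling the per-interval rates out of a saved console log; the file name build.log is an assumption, and the "(N MBps)" fields are the ones printed above:

    # Assumes this console output was captured to build.log.
    grep -o '([0-9]* MBps)' build.log | tr -d '()' \
      | awk '{ n++; s+=$1; if (min=="" || $1<min) min=$1; if ($1>max) max=$1 }
             END { printf "samples=%d min=%d avg=%.1f max=%d MBps\n", n, min, s/n, max }'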
00:26:41.169 [2024-12-06 06:52:53.707075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.169 [2024-12-06 06:52:53.707103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:41.169 [2024-12-06 06:52:53.707114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.172 ms 00:26:41.169 [2024-12-06 06:52:53.707123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.169 [2024-12-06 06:52:53.714671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.169 [2024-12-06 06:52:53.714721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:41.169 [2024-12-06 06:52:53.714732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.525 ms 00:26:41.169 [2024-12-06 06:52:53.714741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.169 [2024-12-06 06:52:53.742198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.169 [2024-12-06 06:52:53.742259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:41.169 [2024-12-06 06:52:53.742276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.377 ms 00:26:41.169 [2024-12-06 06:52:53.742285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.170 [2024-12-06 06:52:53.759056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.170 [2024-12-06 06:52:53.759114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:41.170 [2024-12-06 06:52:53.759137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.695 ms 00:26:41.170 [2024-12-06 06:52:53.759147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.170 [2024-12-06 06:52:53.759320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.170 [2024-12-06 06:52:53.759333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:41.170 [2024-12-06 06:52:53.759354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:26:41.170 [2024-12-06 06:52:53.759363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.170 [2024-12-06 06:52:53.785814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.170 [2024-12-06 06:52:53.785871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:41.170 [2024-12-06 06:52:53.785885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.433 ms 00:26:41.170 [2024-12-06 06:52:53.785892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.170 [2024-12-06 06:52:53.812303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.170 [2024-12-06 06:52:53.812354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:41.170 [2024-12-06 06:52:53.812367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.337 ms 00:26:41.170 [2024-12-06 06:52:53.812376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.170 [2024-12-06 06:52:53.838148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.170 [2024-12-06 06:52:53.838200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:41.170 [2024-12-06 06:52:53.838213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.703 ms 00:26:41.170 [2024-12-06 06:52:53.838222] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.170 [2024-12-06 06:52:53.863612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.170 [2024-12-06 06:52:53.863661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:41.170 [2024-12-06 06:52:53.863675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.285 ms 00:26:41.170 [2024-12-06 06:52:53.863682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.170 [2024-12-06 06:52:53.863737] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:41.170 [2024-12-06 06:52:53.863757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.863769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.863778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.863787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.863796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.863805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.863813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.863822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.863830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.863838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.863847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.863857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.863865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.863873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.863880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.863888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.863895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.863903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.863910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.863918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.863926] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.863934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.863941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.863949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.863957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.863965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.863973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.863980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.863989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.863997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.864008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.864016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.864024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.864032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.864039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.864047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.864054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.864062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.864069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.864077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.864085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.864093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.864101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.864109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.864117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 
06:52:53.864125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.864133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.864140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.864148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.864158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.864174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.864181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.864190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.864198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.864206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.864214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.864221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.864229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.864238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.864246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.864254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.864263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.864271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.864280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.864288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.864296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.864304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:41.170 [2024-12-06 06:52:53.864312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:41.171 [2024-12-06 06:52:53.864320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:41.171 [2024-12-06 06:52:53.864328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 
00:26:41.171 [2024-12-06 06:52:53.864336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:41.171 [2024-12-06 06:52:53.864344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:41.171 [2024-12-06 06:52:53.864352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:41.171 [2024-12-06 06:52:53.864361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:41.171 [2024-12-06 06:52:53.864370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:41.171 [2024-12-06 06:52:53.864378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:41.171 [2024-12-06 06:52:53.864387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:41.171 [2024-12-06 06:52:53.864395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:41.171 [2024-12-06 06:52:53.864404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:41.171 [2024-12-06 06:52:53.864413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:41.171 [2024-12-06 06:52:53.864420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:41.171 [2024-12-06 06:52:53.864428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:41.171 [2024-12-06 06:52:53.864436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:41.171 [2024-12-06 06:52:53.864443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:41.171 [2024-12-06 06:52:53.864450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:41.171 [2024-12-06 06:52:53.864458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:41.171 [2024-12-06 06:52:53.864480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:41.171 [2024-12-06 06:52:53.864488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:41.171 [2024-12-06 06:52:53.864495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:41.171 [2024-12-06 06:52:53.864504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:41.171 [2024-12-06 06:52:53.864513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:41.171 [2024-12-06 06:52:53.864521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:41.171 [2024-12-06 06:52:53.864529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:41.171 [2024-12-06 06:52:53.864548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:41.171 [2024-12-06 06:52:53.864557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 
wr_cnt: 0 state: free 00:26:41.171 [2024-12-06 06:52:53.864566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:41.171 [2024-12-06 06:52:53.864574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:41.171 [2024-12-06 06:52:53.864582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:41.171 [2024-12-06 06:52:53.864590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:41.171 [2024-12-06 06:52:53.864598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:41.171 [2024-12-06 06:52:53.864616] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:41.171 [2024-12-06 06:52:53.864625] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5ee066fa-e7bc-4a33-b1a8-f35f9ed69a0f 00:26:41.171 [2024-12-06 06:52:53.864635] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:41.171 [2024-12-06 06:52:53.864643] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:41.171 [2024-12-06 06:52:53.864651] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:41.171 [2024-12-06 06:52:53.864660] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:41.171 [2024-12-06 06:52:53.864668] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:41.171 [2024-12-06 06:52:53.864677] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:41.171 [2024-12-06 06:52:53.864688] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:41.171 [2024-12-06 06:52:53.864694] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:41.171 [2024-12-06 06:52:53.864701] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:41.171 [2024-12-06 06:52:53.864708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.171 [2024-12-06 06:52:53.864717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:41.171 [2024-12-06 06:52:53.864726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.973 ms 00:26:41.171 [2024-12-06 06:52:53.864734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.171 [2024-12-06 06:52:53.878664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.171 [2024-12-06 06:52:53.878718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:41.171 [2024-12-06 06:52:53.878731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.894 ms 00:26:41.171 [2024-12-06 06:52:53.878742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.171 [2024-12-06 06:52:53.879169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.171 [2024-12-06 06:52:53.879187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:41.171 [2024-12-06 06:52:53.879197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.374 ms 00:26:41.171 [2024-12-06 06:52:53.879206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.431 [2024-12-06 06:52:53.918352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:41.431 [2024-12-06 06:52:53.918419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize reloc 00:26:41.431 [2024-12-06 06:52:53.918432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:41.431 [2024-12-06 06:52:53.918447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.431 [2024-12-06 06:52:53.918604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:41.431 [2024-12-06 06:52:53.918617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:41.431 [2024-12-06 06:52:53.918626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:41.431 [2024-12-06 06:52:53.918635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.431 [2024-12-06 06:52:53.918693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:41.431 [2024-12-06 06:52:53.918704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:41.431 [2024-12-06 06:52:53.918712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:41.432 [2024-12-06 06:52:53.918720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.432 [2024-12-06 06:52:53.918743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:41.432 [2024-12-06 06:52:53.918752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:41.432 [2024-12-06 06:52:53.918760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:41.432 [2024-12-06 06:52:53.918768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.432 [2024-12-06 06:52:54.004481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:41.432 [2024-12-06 06:52:54.004573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:41.432 [2024-12-06 06:52:54.004588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:41.432 [2024-12-06 06:52:54.004596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.432 [2024-12-06 06:52:54.074764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:41.432 [2024-12-06 06:52:54.074840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:41.432 [2024-12-06 06:52:54.074854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:41.432 [2024-12-06 06:52:54.074864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.432 [2024-12-06 06:52:54.074957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:41.432 [2024-12-06 06:52:54.074969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:41.432 [2024-12-06 06:52:54.074978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:41.432 [2024-12-06 06:52:54.074987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.432 [2024-12-06 06:52:54.075022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:41.432 [2024-12-06 06:52:54.075040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:41.432 [2024-12-06 06:52:54.075049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:41.432 [2024-12-06 06:52:54.075057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.432 [2024-12-06 06:52:54.075165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:41.432 
[2024-12-06 06:52:54.075176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:41.432 [2024-12-06 06:52:54.075184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:41.432 [2024-12-06 06:52:54.075193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.432 [2024-12-06 06:52:54.075229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:41.432 [2024-12-06 06:52:54.075239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:41.432 [2024-12-06 06:52:54.075251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:41.432 [2024-12-06 06:52:54.075260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.432 [2024-12-06 06:52:54.075305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:41.432 [2024-12-06 06:52:54.075315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:41.432 [2024-12-06 06:52:54.075324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:41.432 [2024-12-06 06:52:54.075332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.432 [2024-12-06 06:52:54.075381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:41.432 [2024-12-06 06:52:54.075408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:41.432 [2024-12-06 06:52:54.075417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:41.432 [2024-12-06 06:52:54.075425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.432 [2024-12-06 06:52:54.075612] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 376.282 ms, result 0 00:26:42.373 00:26:42.373 00:26:42.373 06:52:54 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:42.941 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:26:42.941 06:52:55 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:26:42.941 06:52:55 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:26:42.941 06:52:55 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:42.941 06:52:55 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:42.941 06:52:55 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:26:42.941 06:52:55 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:26:42.941 06:52:55 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 77248 00:26:42.941 06:52:55 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 77248 ']' 00:26:42.942 06:52:55 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 77248 00:26:42.942 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77248) - No such process 00:26:42.942 Process with pid 77248 is not found 00:26:42.942 06:52:55 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 77248 is not found' 00:26:42.942 00:26:42.942 real 1m16.454s 00:26:42.942 user 1m32.084s 00:26:42.942 sys 0m17.432s 00:26:42.942 06:52:55 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:42.942 ************************************ 00:26:42.942 END TEST ftl_trim 00:26:42.942 ************************************ 00:26:42.942 06:52:55 
ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:26:42.942 06:52:55 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:26:42.942 06:52:55 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:26:42.942 06:52:55 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:42.942 06:52:55 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:42.942 ************************************ 00:26:42.942 START TEST ftl_restore 00:26:42.942 ************************************ 00:26:42.942 06:52:55 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:26:42.942 * Looking for test storage... 00:26:42.942 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:42.942 06:52:55 ftl.ftl_restore -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:43.202 06:52:55 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lcov --version 00:26:43.202 06:52:55 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:43.202 06:52:55 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:43.202 06:52:55 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:43.202 06:52:55 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:43.202 06:52:55 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:43.202 06:52:55 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:26:43.202 06:52:55 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:26:43.202 06:52:55 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:26:43.202 06:52:55 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:26:43.202 06:52:55 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:26:43.202 06:52:55 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:26:43.202 06:52:55 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:26:43.202 06:52:55 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:43.202 06:52:55 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:26:43.202 06:52:55 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:26:43.202 06:52:55 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:43.202 06:52:55 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:43.202 06:52:55 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:26:43.202 06:52:55 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:26:43.202 06:52:55 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:43.202 06:52:55 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:26:43.202 06:52:55 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:26:43.202 06:52:55 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:26:43.202 06:52:55 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:26:43.202 06:52:55 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:43.202 06:52:55 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:26:43.202 06:52:55 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:26:43.202 06:52:55 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:43.202 06:52:55 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:43.202 06:52:55 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:26:43.202 06:52:55 ftl.ftl_restore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:43.202 06:52:55 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:43.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:43.203 --rc genhtml_branch_coverage=1 00:26:43.203 --rc genhtml_function_coverage=1 00:26:43.203 --rc genhtml_legend=1 00:26:43.203 --rc geninfo_all_blocks=1 00:26:43.203 --rc geninfo_unexecuted_blocks=1 00:26:43.203 00:26:43.203 ' 00:26:43.203 06:52:55 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:43.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:43.203 --rc genhtml_branch_coverage=1 00:26:43.203 --rc genhtml_function_coverage=1 00:26:43.203 --rc genhtml_legend=1 00:26:43.203 --rc geninfo_all_blocks=1 00:26:43.203 --rc geninfo_unexecuted_blocks=1 00:26:43.203 00:26:43.203 ' 00:26:43.203 06:52:55 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:43.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:43.203 --rc genhtml_branch_coverage=1 00:26:43.203 --rc genhtml_function_coverage=1 00:26:43.203 --rc genhtml_legend=1 00:26:43.203 --rc geninfo_all_blocks=1 00:26:43.203 --rc geninfo_unexecuted_blocks=1 00:26:43.203 00:26:43.203 ' 00:26:43.203 06:52:55 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:43.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:43.203 --rc genhtml_branch_coverage=1 00:26:43.203 --rc genhtml_function_coverage=1 00:26:43.203 --rc genhtml_legend=1 00:26:43.203 --rc geninfo_all_blocks=1 00:26:43.203 --rc geninfo_unexecuted_blocks=1 00:26:43.203 00:26:43.203 ' 00:26:43.203 06:52:55 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:43.203 06:52:55 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:26:43.203 06:52:55 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:43.203 06:52:55 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:43.203 06:52:55 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
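The xtrace above is the cmp_versions helper from scripts/common.sh concluding that lcov 1.15 is older than 2 before the LCOV_OPTS branch-coverage flags are exported. A minimal standalone sketch of the same dotted-version comparison, for readers following the trace; the function name ver_lt is ours, not SPDK's:

    # Split two dotted versions on '.' and compare field by field,
    # treating missing fields as 0 (so 1.15 < 2, as the trace decides).
    ver_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            local x=${a[i]:-0} y=${b[i]:-0}
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1   # equal versions are not less-than
    }
    ver_lt 1.15 2 && echo "lcov < 2: enable branch/function coverage opts"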
00:26:43.203 06:52:55 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:43.203 06:52:55 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:43.203 06:52:55 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:43.203 06:52:55 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:43.203 06:52:55 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:43.203 06:52:55 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:43.203 06:52:55 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:43.203 06:52:55 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:43.203 06:52:55 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:43.203 06:52:55 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:43.203 06:52:55 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:43.203 06:52:55 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:43.203 06:52:55 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:43.203 06:52:55 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:43.203 06:52:55 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:43.203 06:52:55 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:43.203 06:52:55 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:43.203 06:52:55 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:43.203 06:52:55 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:43.203 06:52:55 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:43.203 06:52:55 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:43.203 06:52:55 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:43.203 06:52:55 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:43.203 06:52:55 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:43.203 06:52:55 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:43.203 06:52:55 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:26:43.203 06:52:55 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.IPVQ7HNQS0 00:26:43.203 06:52:55 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:26:43.203 06:52:55 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:26:43.203 06:52:55 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:26:43.203 06:52:55 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:26:43.203 06:52:55 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:26:43.203 06:52:55 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:26:43.203 06:52:55 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:26:43.203 06:52:55 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:26:43.203 
06:52:55 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=77580 00:26:43.203 06:52:55 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 77580 00:26:43.203 06:52:55 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 77580 ']' 00:26:43.203 06:52:55 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:43.203 06:52:55 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:43.203 06:52:55 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:43.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:43.203 06:52:55 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:43.203 06:52:55 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:43.203 06:52:55 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:26:43.203 [2024-12-06 06:52:55.872771] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:26:43.203 [2024-12-06 06:52:55.873349] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77580 ] 00:26:43.465 [2024-12-06 06:52:56.040733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:43.465 [2024-12-06 06:52:56.179841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:44.406 06:52:56 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:44.406 06:52:56 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:26:44.406 06:52:56 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:26:44.406 06:52:56 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:26:44.406 06:52:56 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:44.406 06:52:56 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:26:44.406 06:52:56 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:26:44.406 06:52:56 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:26:44.664 06:52:57 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:26:44.664 06:52:57 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:26:44.664 06:52:57 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:26:44.664 06:52:57 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:26:44.664 06:52:57 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:44.664 06:52:57 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:26:44.664 06:52:57 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:26:44.664 06:52:57 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:26:45.037 06:52:57 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:45.037 { 00:26:45.037 "name": "nvme0n1", 00:26:45.037 "aliases": [ 00:26:45.037 "4c61b24a-6695-4f63-89ae-0464d4f3114e" 00:26:45.037 ], 00:26:45.037 "product_name": "NVMe disk", 00:26:45.037 "block_size": 4096, 00:26:45.037 "num_blocks": 1310720, 00:26:45.037 "uuid": 
"4c61b24a-6695-4f63-89ae-0464d4f3114e", 00:26:45.037 "numa_id": -1, 00:26:45.037 "assigned_rate_limits": { 00:26:45.037 "rw_ios_per_sec": 0, 00:26:45.037 "rw_mbytes_per_sec": 0, 00:26:45.037 "r_mbytes_per_sec": 0, 00:26:45.037 "w_mbytes_per_sec": 0 00:26:45.037 }, 00:26:45.037 "claimed": true, 00:26:45.037 "claim_type": "read_many_write_one", 00:26:45.037 "zoned": false, 00:26:45.037 "supported_io_types": { 00:26:45.037 "read": true, 00:26:45.037 "write": true, 00:26:45.037 "unmap": true, 00:26:45.037 "flush": true, 00:26:45.037 "reset": true, 00:26:45.037 "nvme_admin": true, 00:26:45.037 "nvme_io": true, 00:26:45.037 "nvme_io_md": false, 00:26:45.037 "write_zeroes": true, 00:26:45.037 "zcopy": false, 00:26:45.037 "get_zone_info": false, 00:26:45.037 "zone_management": false, 00:26:45.037 "zone_append": false, 00:26:45.037 "compare": true, 00:26:45.037 "compare_and_write": false, 00:26:45.037 "abort": true, 00:26:45.037 "seek_hole": false, 00:26:45.037 "seek_data": false, 00:26:45.037 "copy": true, 00:26:45.037 "nvme_iov_md": false 00:26:45.037 }, 00:26:45.037 "driver_specific": { 00:26:45.037 "nvme": [ 00:26:45.037 { 00:26:45.037 "pci_address": "0000:00:11.0", 00:26:45.037 "trid": { 00:26:45.037 "trtype": "PCIe", 00:26:45.037 "traddr": "0000:00:11.0" 00:26:45.037 }, 00:26:45.037 "ctrlr_data": { 00:26:45.037 "cntlid": 0, 00:26:45.037 "vendor_id": "0x1b36", 00:26:45.037 "model_number": "QEMU NVMe Ctrl", 00:26:45.037 "serial_number": "12341", 00:26:45.037 "firmware_revision": "8.0.0", 00:26:45.037 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:45.037 "oacs": { 00:26:45.037 "security": 0, 00:26:45.037 "format": 1, 00:26:45.037 "firmware": 0, 00:26:45.037 "ns_manage": 1 00:26:45.037 }, 00:26:45.037 "multi_ctrlr": false, 00:26:45.037 "ana_reporting": false 00:26:45.037 }, 00:26:45.037 "vs": { 00:26:45.037 "nvme_version": "1.4" 00:26:45.037 }, 00:26:45.037 "ns_data": { 00:26:45.037 "id": 1, 00:26:45.037 "can_share": false 00:26:45.037 } 00:26:45.037 } 00:26:45.037 ], 00:26:45.037 "mp_policy": "active_passive" 00:26:45.037 } 00:26:45.037 } 00:26:45.037 ]' 00:26:45.037 06:52:57 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:45.037 06:52:57 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:26:45.037 06:52:57 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:45.037 06:52:57 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:26:45.037 06:52:57 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:26:45.037 06:52:57 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:26:45.037 06:52:57 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:26:45.037 06:52:57 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:26:45.037 06:52:57 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:26:45.037 06:52:57 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:45.037 06:52:57 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:45.295 06:52:57 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=0837f94a-1e86-49e8-8289-a00fc58d04db 00:26:45.295 06:52:57 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:26:45.295 06:52:57 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0837f94a-1e86-49e8-8289-a00fc58d04db 00:26:45.295 06:52:57 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:26:45.552 06:52:58 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=0ee3844a-203b-4e4a-a874-b0f424608289 00:26:45.552 06:52:58 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0ee3844a-203b-4e4a-a874-b0f424608289 00:26:45.811 06:52:58 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=c5204821-f80e-43fb-80de-dfa439e7bed3 00:26:45.811 06:52:58 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:26:45.811 06:52:58 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 c5204821-f80e-43fb-80de-dfa439e7bed3 00:26:45.811 06:52:58 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:26:45.811 06:52:58 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:26:45.811 06:52:58 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=c5204821-f80e-43fb-80de-dfa439e7bed3 00:26:45.811 06:52:58 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:26:45.811 06:52:58 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size c5204821-f80e-43fb-80de-dfa439e7bed3 00:26:45.811 06:52:58 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=c5204821-f80e-43fb-80de-dfa439e7bed3 00:26:45.811 06:52:58 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:45.811 06:52:58 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:26:45.811 06:52:58 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:26:45.811 06:52:58 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c5204821-f80e-43fb-80de-dfa439e7bed3 00:26:46.069 06:52:58 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:46.069 { 00:26:46.069 "name": "c5204821-f80e-43fb-80de-dfa439e7bed3", 00:26:46.069 "aliases": [ 00:26:46.069 "lvs/nvme0n1p0" 00:26:46.069 ], 00:26:46.069 "product_name": "Logical Volume", 00:26:46.069 "block_size": 4096, 00:26:46.069 "num_blocks": 26476544, 00:26:46.069 "uuid": "c5204821-f80e-43fb-80de-dfa439e7bed3", 00:26:46.069 "assigned_rate_limits": { 00:26:46.069 "rw_ios_per_sec": 0, 00:26:46.069 "rw_mbytes_per_sec": 0, 00:26:46.069 "r_mbytes_per_sec": 0, 00:26:46.069 "w_mbytes_per_sec": 0 00:26:46.069 }, 00:26:46.069 "claimed": false, 00:26:46.069 "zoned": false, 00:26:46.069 "supported_io_types": { 00:26:46.069 "read": true, 00:26:46.069 "write": true, 00:26:46.069 "unmap": true, 00:26:46.069 "flush": false, 00:26:46.069 "reset": true, 00:26:46.069 "nvme_admin": false, 00:26:46.069 "nvme_io": false, 00:26:46.069 "nvme_io_md": false, 00:26:46.069 "write_zeroes": true, 00:26:46.069 "zcopy": false, 00:26:46.069 "get_zone_info": false, 00:26:46.069 "zone_management": false, 00:26:46.069 "zone_append": false, 00:26:46.069 "compare": false, 00:26:46.069 "compare_and_write": false, 00:26:46.069 "abort": false, 00:26:46.069 "seek_hole": true, 00:26:46.069 "seek_data": true, 00:26:46.069 "copy": false, 00:26:46.069 "nvme_iov_md": false 00:26:46.069 }, 00:26:46.069 "driver_specific": { 00:26:46.069 "lvol": { 00:26:46.069 "lvol_store_uuid": "0ee3844a-203b-4e4a-a874-b0f424608289", 00:26:46.069 "base_bdev": "nvme0n1", 00:26:46.069 "thin_provision": true, 00:26:46.069 "num_allocated_clusters": 0, 00:26:46.069 "snapshot": false, 00:26:46.069 "clone": false, 00:26:46.069 "esnap_clone": false 00:26:46.069 } 00:26:46.069 } 00:26:46.069 } 00:26:46.069 ]' 00:26:46.069 06:52:58 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:46.069 06:52:58 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:26:46.069 06:52:58 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:46.069 06:52:58 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:46.069 06:52:58 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:46.069 06:52:58 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:26:46.069 06:52:58 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:26:46.069 06:52:58 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:26:46.069 06:52:58 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:26:46.328 06:52:58 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:26:46.328 06:52:58 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:26:46.328 06:52:58 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size c5204821-f80e-43fb-80de-dfa439e7bed3 00:26:46.328 06:52:58 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=c5204821-f80e-43fb-80de-dfa439e7bed3 00:26:46.328 06:52:58 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:46.328 06:52:58 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:26:46.328 06:52:58 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:26:46.328 06:52:58 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c5204821-f80e-43fb-80de-dfa439e7bed3 00:26:46.586 06:52:59 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:46.586 { 00:26:46.586 "name": "c5204821-f80e-43fb-80de-dfa439e7bed3", 00:26:46.586 "aliases": [ 00:26:46.586 "lvs/nvme0n1p0" 00:26:46.586 ], 00:26:46.586 "product_name": "Logical Volume", 00:26:46.586 "block_size": 4096, 00:26:46.586 "num_blocks": 26476544, 00:26:46.586 "uuid": "c5204821-f80e-43fb-80de-dfa439e7bed3", 00:26:46.586 "assigned_rate_limits": { 00:26:46.586 "rw_ios_per_sec": 0, 00:26:46.586 "rw_mbytes_per_sec": 0, 00:26:46.586 "r_mbytes_per_sec": 0, 00:26:46.586 "w_mbytes_per_sec": 0 00:26:46.586 }, 00:26:46.586 "claimed": false, 00:26:46.586 "zoned": false, 00:26:46.586 "supported_io_types": { 00:26:46.586 "read": true, 00:26:46.586 "write": true, 00:26:46.586 "unmap": true, 00:26:46.586 "flush": false, 00:26:46.586 "reset": true, 00:26:46.586 "nvme_admin": false, 00:26:46.586 "nvme_io": false, 00:26:46.586 "nvme_io_md": false, 00:26:46.586 "write_zeroes": true, 00:26:46.586 "zcopy": false, 00:26:46.586 "get_zone_info": false, 00:26:46.586 "zone_management": false, 00:26:46.586 "zone_append": false, 00:26:46.586 "compare": false, 00:26:46.586 "compare_and_write": false, 00:26:46.586 "abort": false, 00:26:46.586 "seek_hole": true, 00:26:46.586 "seek_data": true, 00:26:46.586 "copy": false, 00:26:46.586 "nvme_iov_md": false 00:26:46.586 }, 00:26:46.586 "driver_specific": { 00:26:46.586 "lvol": { 00:26:46.586 "lvol_store_uuid": "0ee3844a-203b-4e4a-a874-b0f424608289", 00:26:46.586 "base_bdev": "nvme0n1", 00:26:46.586 "thin_provision": true, 00:26:46.586 "num_allocated_clusters": 0, 00:26:46.586 "snapshot": false, 00:26:46.586 "clone": false, 00:26:46.586 "esnap_clone": false 00:26:46.586 } 00:26:46.586 } 00:26:46.586 } 00:26:46.586 ]' 00:26:46.586 06:52:59 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
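The get_bdev_size helper being traced here derives a bdev's size in MiB from rpc.py bdev_get_bdevs output: block_size and num_blocks are extracted with jq and multiplied, so 1310720 blocks × 4096 B yields the 5120 MiB reported for nvme0n1, and 26476544 × 4096 B the 103424 MiB reported for the lvol. A minimal stand-alone sketch of that computation follows; the function name and the final divide-by-MiB step are inferred from the traced values, not copied from autotest_common.sh:

  # Sketch of the traced size computation; get_bdev_size_mib is a hypothetical name.
  get_bdev_size_mib() {
      local bdev_name=$1 bdev_info bs nb
      bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$bdev_name")
      bs=$(jq '.[] .block_size' <<< "$bdev_info")   # e.g. 4096
      nb=$(jq '.[] .num_blocks' <<< "$bdev_info")   # e.g. 26476544
      echo $(( bs * nb / 1024 / 1024 ))             # bdev size in MiB, e.g. 103424
  }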
00:26:46.586 06:52:59 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:26:46.586 06:52:59 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:46.586 06:52:59 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:46.586 06:52:59 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:46.586 06:52:59 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:26:46.586 06:52:59 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:26:46.586 06:52:59 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:26:46.845 06:52:59 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:26:46.845 06:52:59 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size c5204821-f80e-43fb-80de-dfa439e7bed3 00:26:46.845 06:52:59 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=c5204821-f80e-43fb-80de-dfa439e7bed3 00:26:46.845 06:52:59 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:46.845 06:52:59 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:26:46.845 06:52:59 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:26:46.845 06:52:59 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c5204821-f80e-43fb-80de-dfa439e7bed3 00:26:47.104 06:52:59 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:47.104 { 00:26:47.104 "name": "c5204821-f80e-43fb-80de-dfa439e7bed3", 00:26:47.104 "aliases": [ 00:26:47.104 "lvs/nvme0n1p0" 00:26:47.104 ], 00:26:47.104 "product_name": "Logical Volume", 00:26:47.104 "block_size": 4096, 00:26:47.104 "num_blocks": 26476544, 00:26:47.104 "uuid": "c5204821-f80e-43fb-80de-dfa439e7bed3", 00:26:47.104 "assigned_rate_limits": { 00:26:47.104 "rw_ios_per_sec": 0, 00:26:47.104 "rw_mbytes_per_sec": 0, 00:26:47.104 "r_mbytes_per_sec": 0, 00:26:47.104 "w_mbytes_per_sec": 0 00:26:47.104 }, 00:26:47.104 "claimed": false, 00:26:47.104 "zoned": false, 00:26:47.104 "supported_io_types": { 00:26:47.104 "read": true, 00:26:47.104 "write": true, 00:26:47.104 "unmap": true, 00:26:47.104 "flush": false, 00:26:47.104 "reset": true, 00:26:47.104 "nvme_admin": false, 00:26:47.104 "nvme_io": false, 00:26:47.104 "nvme_io_md": false, 00:26:47.104 "write_zeroes": true, 00:26:47.104 "zcopy": false, 00:26:47.104 "get_zone_info": false, 00:26:47.104 "zone_management": false, 00:26:47.104 "zone_append": false, 00:26:47.104 "compare": false, 00:26:47.104 "compare_and_write": false, 00:26:47.104 "abort": false, 00:26:47.104 "seek_hole": true, 00:26:47.104 "seek_data": true, 00:26:47.104 "copy": false, 00:26:47.104 "nvme_iov_md": false 00:26:47.104 }, 00:26:47.104 "driver_specific": { 00:26:47.104 "lvol": { 00:26:47.104 "lvol_store_uuid": "0ee3844a-203b-4e4a-a874-b0f424608289", 00:26:47.104 "base_bdev": "nvme0n1", 00:26:47.104 "thin_provision": true, 00:26:47.104 "num_allocated_clusters": 0, 00:26:47.104 "snapshot": false, 00:26:47.104 "clone": false, 00:26:47.104 "esnap_clone": false 00:26:47.104 } 00:26:47.104 } 00:26:47.104 } 00:26:47.104 ]' 00:26:47.104 06:52:59 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:47.104 06:52:59 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:26:47.104 06:52:59 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:47.104 06:52:59 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:26:47.104 06:52:59 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:47.104 06:52:59 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:26:47.104 06:52:59 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:26:47.104 06:52:59 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d c5204821-f80e-43fb-80de-dfa439e7bed3 --l2p_dram_limit 10' 00:26:47.104 06:52:59 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:26:47.104 06:52:59 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:26:47.104 06:52:59 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:26:47.104 06:52:59 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:26:47.104 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:26:47.104 06:52:59 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c5204821-f80e-43fb-80de-dfa439e7bed3 --l2p_dram_limit 10 -c nvc0n1p0 00:26:47.364 [2024-12-06 06:52:59.852448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.364 [2024-12-06 06:52:59.852509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:47.364 [2024-12-06 06:52:59.852524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:47.364 [2024-12-06 06:52:59.852533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.364 [2024-12-06 06:52:59.852592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.364 [2024-12-06 06:52:59.852602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:47.364 [2024-12-06 06:52:59.852612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:26:47.364 [2024-12-06 06:52:59.852619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.364 [2024-12-06 06:52:59.852644] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:47.364 [2024-12-06 06:52:59.853424] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:47.364 [2024-12-06 06:52:59.853454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.364 [2024-12-06 06:52:59.853479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:47.364 [2024-12-06 06:52:59.853490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.816 ms 00:26:47.364 [2024-12-06 06:52:59.853498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.364 [2024-12-06 06:52:59.853569] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d910f68e-4869-4d1c-9d11-c1e41849a5be 00:26:47.364 [2024-12-06 06:52:59.854623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.364 [2024-12-06 06:52:59.854655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:26:47.364 [2024-12-06 06:52:59.854664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:26:47.364 [2024-12-06 06:52:59.854673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.364 [2024-12-06 06:52:59.859933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.364 [2024-12-06 
06:52:59.860066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:47.364 [2024-12-06 06:52:59.860082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.212 ms 00:26:47.364 [2024-12-06 06:52:59.860091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.364 [2024-12-06 06:52:59.860174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.364 [2024-12-06 06:52:59.860185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:47.364 [2024-12-06 06:52:59.860193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:26:47.364 [2024-12-06 06:52:59.860211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.364 [2024-12-06 06:52:59.860253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.364 [2024-12-06 06:52:59.860264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:47.364 [2024-12-06 06:52:59.860275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:47.364 [2024-12-06 06:52:59.860284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.364 [2024-12-06 06:52:59.860305] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:47.364 [2024-12-06 06:52:59.863878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.364 [2024-12-06 06:52:59.863905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:47.364 [2024-12-06 06:52:59.863918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.577 ms 00:26:47.364 [2024-12-06 06:52:59.863926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.364 [2024-12-06 06:52:59.863959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.364 [2024-12-06 06:52:59.863967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:47.364 [2024-12-06 06:52:59.863977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:47.364 [2024-12-06 06:52:59.863984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.364 [2024-12-06 06:52:59.864002] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:26:47.364 [2024-12-06 06:52:59.864140] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:47.364 [2024-12-06 06:52:59.864159] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:47.364 [2024-12-06 06:52:59.864169] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:47.364 [2024-12-06 06:52:59.864181] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:47.364 [2024-12-06 06:52:59.864190] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:47.364 [2024-12-06 06:52:59.864200] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:47.364 [2024-12-06 06:52:59.864207] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:47.364 [2024-12-06 06:52:59.864219] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:47.364 [2024-12-06 06:52:59.864227] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:47.364 [2024-12-06 06:52:59.864237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.364 [2024-12-06 06:52:59.864249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:47.365 [2024-12-06 06:52:59.864259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.235 ms 00:26:47.365 [2024-12-06 06:52:59.864266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.365 [2024-12-06 06:52:59.864351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.365 [2024-12-06 06:52:59.864359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:47.365 [2024-12-06 06:52:59.864368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:26:47.365 [2024-12-06 06:52:59.864375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.365 [2024-12-06 06:52:59.864500] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:47.365 [2024-12-06 06:52:59.864512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:47.365 [2024-12-06 06:52:59.864522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:47.365 [2024-12-06 06:52:59.864530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:47.365 [2024-12-06 06:52:59.864539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:47.365 [2024-12-06 06:52:59.864546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:47.365 [2024-12-06 06:52:59.864554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:47.365 [2024-12-06 06:52:59.864561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:47.365 [2024-12-06 06:52:59.864569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:47.365 [2024-12-06 06:52:59.864575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:47.365 [2024-12-06 06:52:59.864585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:47.365 [2024-12-06 06:52:59.864591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:47.365 [2024-12-06 06:52:59.864599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:47.365 [2024-12-06 06:52:59.864606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:47.365 [2024-12-06 06:52:59.864614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:47.365 [2024-12-06 06:52:59.864621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:47.365 [2024-12-06 06:52:59.864630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:47.365 [2024-12-06 06:52:59.864637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:47.365 [2024-12-06 06:52:59.864644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:47.365 [2024-12-06 06:52:59.864651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:47.365 [2024-12-06 06:52:59.864659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:47.365 [2024-12-06 06:52:59.864665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:47.365 [2024-12-06 06:52:59.864673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:47.365 
[2024-12-06 06:52:59.864680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:47.365 [2024-12-06 06:52:59.864688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:47.365 [2024-12-06 06:52:59.864696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:47.365 [2024-12-06 06:52:59.864704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:47.365 [2024-12-06 06:52:59.864711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:47.365 [2024-12-06 06:52:59.864719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:47.365 [2024-12-06 06:52:59.864726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:47.365 [2024-12-06 06:52:59.864734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:47.365 [2024-12-06 06:52:59.864740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:47.365 [2024-12-06 06:52:59.864750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:47.365 [2024-12-06 06:52:59.864756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:47.365 [2024-12-06 06:52:59.864764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:47.365 [2024-12-06 06:52:59.864771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:47.365 [2024-12-06 06:52:59.864780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:47.365 [2024-12-06 06:52:59.864786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:47.365 [2024-12-06 06:52:59.864794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:47.365 [2024-12-06 06:52:59.864801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:47.365 [2024-12-06 06:52:59.864809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:47.365 [2024-12-06 06:52:59.864815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:47.365 [2024-12-06 06:52:59.864823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:47.365 [2024-12-06 06:52:59.864829] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:47.365 [2024-12-06 06:52:59.864838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:47.365 [2024-12-06 06:52:59.864846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:47.365 [2024-12-06 06:52:59.864854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:47.365 [2024-12-06 06:52:59.864861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:47.365 [2024-12-06 06:52:59.864870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:47.365 [2024-12-06 06:52:59.864877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:47.365 [2024-12-06 06:52:59.864885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:47.365 [2024-12-06 06:52:59.864892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:47.365 [2024-12-06 06:52:59.864899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:47.365 [2024-12-06 06:52:59.864908] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:47.365 [2024-12-06 
06:52:59.864920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:47.365 [2024-12-06 06:52:59.864929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:47.365 [2024-12-06 06:52:59.864937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:47.365 [2024-12-06 06:52:59.864945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:47.365 [2024-12-06 06:52:59.864954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:47.365 [2024-12-06 06:52:59.864961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:47.365 [2024-12-06 06:52:59.864970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:47.365 [2024-12-06 06:52:59.864976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:47.365 [2024-12-06 06:52:59.864986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:47.365 [2024-12-06 06:52:59.864994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:47.365 [2024-12-06 06:52:59.865004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:47.365 [2024-12-06 06:52:59.865011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:47.365 [2024-12-06 06:52:59.865019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:47.365 [2024-12-06 06:52:59.865026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:47.365 [2024-12-06 06:52:59.865034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:47.365 [2024-12-06 06:52:59.865041] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:47.365 [2024-12-06 06:52:59.865050] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:47.365 [2024-12-06 06:52:59.865058] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:47.365 [2024-12-06 06:52:59.865067] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:47.365 [2024-12-06 06:52:59.865074] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:47.365 [2024-12-06 06:52:59.865082] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:47.365 [2024-12-06 06:52:59.865089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.365 [2024-12-06 06:52:59.865098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:47.365 [2024-12-06 06:52:59.865105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.671 ms 00:26:47.365 [2024-12-06 06:52:59.865114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.365 [2024-12-06 06:52:59.865155] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:26:47.365 [2024-12-06 06:52:59.865168] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:26:52.675 [2024-12-06 06:53:04.934806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.675 [2024-12-06 06:53:04.934866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:26:52.675 [2024-12-06 06:53:04.934881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5069.633 ms 00:26:52.675 [2024-12-06 06:53:04.934892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.675 [2024-12-06 06:53:04.960686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.675 [2024-12-06 06:53:04.960733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:52.675 [2024-12-06 06:53:04.960745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.592 ms 00:26:52.675 [2024-12-06 06:53:04.960754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.675 [2024-12-06 06:53:04.960871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.675 [2024-12-06 06:53:04.960882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:52.675 [2024-12-06 06:53:04.960891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:26:52.675 [2024-12-06 06:53:04.960905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.675 [2024-12-06 06:53:04.991373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.675 [2024-12-06 06:53:04.991422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:52.675 [2024-12-06 06:53:04.991433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.435 ms 00:26:52.675 [2024-12-06 06:53:04.991443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.675 [2024-12-06 06:53:04.991483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.675 [2024-12-06 06:53:04.991497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:52.675 [2024-12-06 06:53:04.991505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:52.675 [2024-12-06 06:53:04.991521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.675 [2024-12-06 06:53:04.991873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.675 [2024-12-06 06:53:04.991896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:52.675 [2024-12-06 06:53:04.991905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:26:52.675 [2024-12-06 06:53:04.991914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.675 
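For reference, the bdev stack that this FTL startup is running on was assembled earlier in the log by the following rpc.py sequence (bdev names, PCIe addresses, sizes, and UUIDs exactly as captured above). The layout dump is self-consistent with it: 20971520 L2P entries × 4-byte addresses = 80 MiB, matching the 'Region l2p ... blocks: 80.00 MiB' line, and the 5171 MiB nvc0n1p0 split matches the reported NV cache device capacity of 5171.00 MiB.

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base NVMe -> nvme0n1
  $RPC bdev_lvol_create_lvstore nvme0n1 lvs                           # lvstore 0ee3844a-203b-4e4a-a874-b0f424608289
  $RPC bdev_lvol_create nvme0n1p0 103424 -t -u 0ee3844a-203b-4e4a-a874-b0f424608289   # thin lvol, 103424 MiB
  $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # NV cache NVMe -> nvc0n1
  $RPC bdev_split_create nvc0n1 -s 5171 1                             # -> nvc0n1p0, 5171 MiB write buffer cache
  $RPC -t 240 bdev_ftl_create -b ftl0 -d c5204821-f80e-43fb-80de-dfa439e7bed3 --l2p_dram_limit 10 -c nvc0n1p0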
[2024-12-06 06:53:04.992014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.675 [2024-12-06 06:53:04.992025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:52.675 [2024-12-06 06:53:04.992035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:26:52.675 [2024-12-06 06:53:04.992046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.675 [2024-12-06 06:53:05.005985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.675 [2024-12-06 06:53:05.006133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:52.675 [2024-12-06 06:53:05.006150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.923 ms 00:26:52.675 [2024-12-06 06:53:05.006160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.675 [2024-12-06 06:53:05.031068] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:52.675 [2024-12-06 06:53:05.034085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.675 [2024-12-06 06:53:05.034116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:52.675 [2024-12-06 06:53:05.034130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.850 ms 00:26:52.675 [2024-12-06 06:53:05.034138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.675 [2024-12-06 06:53:05.242417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.675 [2024-12-06 06:53:05.242625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:26:52.675 [2024-12-06 06:53:05.242650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 208.240 ms 00:26:52.675 [2024-12-06 06:53:05.242660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.675 [2024-12-06 06:53:05.242829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.675 [2024-12-06 06:53:05.242841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:52.675 [2024-12-06 06:53:05.242854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:26:52.675 [2024-12-06 06:53:05.242862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.675 [2024-12-06 06:53:05.266396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.675 [2024-12-06 06:53:05.266548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:26:52.675 [2024-12-06 06:53:05.266569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.487 ms 00:26:52.675 [2024-12-06 06:53:05.266578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.675 [2024-12-06 06:53:05.288744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.675 [2024-12-06 06:53:05.288774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:26:52.675 [2024-12-06 06:53:05.288788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.136 ms 00:26:52.675 [2024-12-06 06:53:05.288795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.675 [2024-12-06 06:53:05.289338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.675 [2024-12-06 06:53:05.289351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:52.675 
[2024-12-06 06:53:05.289362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.522 ms 00:26:52.675 [2024-12-06 06:53:05.289371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.933 [2024-12-06 06:53:05.418256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.933 [2024-12-06 06:53:05.418424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:26:52.933 [2024-12-06 06:53:05.418450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 128.850 ms 00:26:52.933 [2024-12-06 06:53:05.418458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.933 [2024-12-06 06:53:05.443299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.933 [2024-12-06 06:53:05.443355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:26:52.933 [2024-12-06 06:53:05.443371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.756 ms 00:26:52.933 [2024-12-06 06:53:05.443378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.933 [2024-12-06 06:53:05.467366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.933 [2024-12-06 06:53:05.467415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:26:52.933 [2024-12-06 06:53:05.467429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.934 ms 00:26:52.933 [2024-12-06 06:53:05.467436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.933 [2024-12-06 06:53:05.492217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.933 [2024-12-06 06:53:05.492254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:52.933 [2024-12-06 06:53:05.492268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.730 ms 00:26:52.933 [2024-12-06 06:53:05.492277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.933 [2024-12-06 06:53:05.492316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.933 [2024-12-06 06:53:05.492326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:52.933 [2024-12-06 06:53:05.492339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:52.933 [2024-12-06 06:53:05.492346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.933 [2024-12-06 06:53:05.492422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.933 [2024-12-06 06:53:05.492433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:52.933 [2024-12-06 06:53:05.492443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:26:52.933 [2024-12-06 06:53:05.492450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.933 [2024-12-06 06:53:05.493303] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 5640.447 ms, result 0 00:26:52.933 { 00:26:52.933 "name": "ftl0", 00:26:52.933 "uuid": "d910f68e-4869-4d1c-9d11-c1e41849a5be" 00:26:52.933 } 00:26:52.933 06:53:05 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:26:52.933 06:53:05 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:26:53.191 06:53:05 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:26:53.191 06:53:05 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:26:53.450 [2024-12-06 06:53:05.968999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.450 [2024-12-06 06:53:05.969058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:53.450 [2024-12-06 06:53:05.969072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:53.450 [2024-12-06 06:53:05.969082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.450 [2024-12-06 06:53:05.969106] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:53.450 [2024-12-06 06:53:05.971750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.450 [2024-12-06 06:53:05.971885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:53.450 [2024-12-06 06:53:05.971905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.626 ms 00:26:53.450 [2024-12-06 06:53:05.971913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.450 [2024-12-06 06:53:05.972172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.450 [2024-12-06 06:53:05.972185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:53.450 [2024-12-06 06:53:05.972194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:26:53.450 [2024-12-06 06:53:05.972202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.450 [2024-12-06 06:53:05.975432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.450 [2024-12-06 06:53:05.975538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:53.450 [2024-12-06 06:53:05.975554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.214 ms 00:26:53.450 [2024-12-06 06:53:05.975562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.450 [2024-12-06 06:53:05.981710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.450 [2024-12-06 06:53:05.981733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:53.450 [2024-12-06 06:53:05.981747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.125 ms 00:26:53.450 [2024-12-06 06:53:05.981754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.450 [2024-12-06 06:53:06.005455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.450 [2024-12-06 06:53:06.005494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:53.450 [2024-12-06 06:53:06.005507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.634 ms 00:26:53.450 [2024-12-06 06:53:06.005516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.450 [2024-12-06 06:53:06.020747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.450 [2024-12-06 06:53:06.020779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:53.450 [2024-12-06 06:53:06.020793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.190 ms 00:26:53.450 [2024-12-06 06:53:06.020802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.450 [2024-12-06 06:53:06.020948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.450 [2024-12-06 06:53:06.020959] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:53.450 [2024-12-06 06:53:06.020969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:26:53.450 [2024-12-06 06:53:06.020977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.450 [2024-12-06 06:53:06.043885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.450 [2024-12-06 06:53:06.043916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:53.450 [2024-12-06 06:53:06.043928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.886 ms 00:26:53.450 [2024-12-06 06:53:06.043935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.450 [2024-12-06 06:53:06.067045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.450 [2024-12-06 06:53:06.067182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:53.450 [2024-12-06 06:53:06.067201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.074 ms 00:26:53.450 [2024-12-06 06:53:06.067208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.450 [2024-12-06 06:53:06.089951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.450 [2024-12-06 06:53:06.090066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:53.450 [2024-12-06 06:53:06.090085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.707 ms 00:26:53.450 [2024-12-06 06:53:06.090092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.450 [2024-12-06 06:53:06.112580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.450 [2024-12-06 06:53:06.112692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:53.450 [2024-12-06 06:53:06.112710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.418 ms 00:26:53.450 [2024-12-06 06:53:06.112717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.450 [2024-12-06 06:53:06.112748] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:53.450 [2024-12-06 06:53:06.112762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:53.450 [2024-12-06 06:53:06.112776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:53.450 [2024-12-06 06:53:06.112784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:53.450 [2024-12-06 06:53:06.112793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:53.450 [2024-12-06 06:53:06.112800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:53.450 [2024-12-06 06:53:06.112810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:53.450 [2024-12-06 06:53:06.112818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:53.450 [2024-12-06 06:53:06.112829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:53.450 [2024-12-06 06:53:06.112837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:53.450 [2024-12-06 06:53:06.112846] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:53.450 [2024-12-06 06:53:06.112853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:53.450 [2024-12-06 06:53:06.112862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:53.450 [2024-12-06 06:53:06.112869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:53.450 [2024-12-06 06:53:06.112878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:53.450 [2024-12-06 06:53:06.112885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:53.450 [2024-12-06 06:53:06.112894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:53.450 [2024-12-06 06:53:06.112902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:53.450 [2024-12-06 06:53:06.112911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:53.450 [2024-12-06 06:53:06.112918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:53.450 [2024-12-06 06:53:06.112928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:53.450 [2024-12-06 06:53:06.112936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:53.450 [2024-12-06 06:53:06.112945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:53.450 [2024-12-06 06:53:06.112952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:53.450 [2024-12-06 06:53:06.112962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:53.450 [2024-12-06 06:53:06.112969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:53.450 [2024-12-06 06:53:06.112978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:53.450 [2024-12-06 06:53:06.112985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:53.450 [2024-12-06 06:53:06.112995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:53.450 [2024-12-06 06:53:06.113003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:53.450 [2024-12-06 06:53:06.113012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 
[2024-12-06 06:53:06.113052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:26:53.451 [2024-12-06 06:53:06.113354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:53.451 [2024-12-06 06:53:06.113723] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:53.451 [2024-12-06 06:53:06.113732] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d910f68e-4869-4d1c-9d11-c1e41849a5be 00:26:53.451 [2024-12-06 06:53:06.113740] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:53.451 [2024-12-06 06:53:06.113749] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:53.451 [2024-12-06 06:53:06.113759] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:53.451 [2024-12-06 06:53:06.113768] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:53.451 [2024-12-06 06:53:06.113775] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:53.451 [2024-12-06 06:53:06.113784] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:53.451 [2024-12-06 06:53:06.113791] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:53.451 [2024-12-06 06:53:06.113804] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:53.451 [2024-12-06 06:53:06.113811] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:26:53.451 [2024-12-06 06:53:06.113819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.451 [2024-12-06 06:53:06.113826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:53.451 [2024-12-06 06:53:06.113836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.073 ms 00:26:53.451 [2024-12-06 06:53:06.113845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.451 [2024-12-06 06:53:06.126253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.451 [2024-12-06 06:53:06.126281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:53.451 [2024-12-06 06:53:06.126293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.365 ms 00:26:53.451 [2024-12-06 06:53:06.126300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.451 [2024-12-06 06:53:06.126684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.451 [2024-12-06 06:53:06.126699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:53.451 [2024-12-06 06:53:06.126711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.349 ms 00:26:53.451 [2024-12-06 06:53:06.126718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.451 [2024-12-06 06:53:06.167935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:53.451 [2024-12-06 06:53:06.168057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:53.452 [2024-12-06 06:53:06.168076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:53.452 [2024-12-06 06:53:06.168084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.452 [2024-12-06 06:53:06.168143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:53.452 [2024-12-06 06:53:06.168151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:53.452 [2024-12-06 06:53:06.168162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:53.452 [2024-12-06 06:53:06.168169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.452 [2024-12-06 06:53:06.168236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:53.452 [2024-12-06 06:53:06.168246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:53.452 [2024-12-06 06:53:06.168256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:53.452 [2024-12-06 06:53:06.168263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.452 [2024-12-06 06:53:06.168284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:53.452 [2024-12-06 06:53:06.168291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:53.452 [2024-12-06 06:53:06.168300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:53.452 [2024-12-06 06:53:06.168309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.709 [2024-12-06 06:53:06.243720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:53.709 [2024-12-06 06:53:06.243768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:53.709 [2024-12-06 06:53:06.243781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:26:53.709 [2024-12-06 06:53:06.243789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.709 [2024-12-06 06:53:06.305284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:53.709 [2024-12-06 06:53:06.305332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:53.709 [2024-12-06 06:53:06.305347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:53.709 [2024-12-06 06:53:06.305358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.709 [2024-12-06 06:53:06.305450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:53.709 [2024-12-06 06:53:06.305459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:53.709 [2024-12-06 06:53:06.305651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:53.709 [2024-12-06 06:53:06.305673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.709 [2024-12-06 06:53:06.305745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:53.709 [2024-12-06 06:53:06.306070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:53.709 [2024-12-06 06:53:06.306085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:53.709 [2024-12-06 06:53:06.306093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.709 [2024-12-06 06:53:06.306196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:53.709 [2024-12-06 06:53:06.306205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:53.709 [2024-12-06 06:53:06.306215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:53.709 [2024-12-06 06:53:06.306222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.709 [2024-12-06 06:53:06.306254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:53.709 [2024-12-06 06:53:06.306264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:53.709 [2024-12-06 06:53:06.306273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:53.709 [2024-12-06 06:53:06.306280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.709 [2024-12-06 06:53:06.306318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:53.709 [2024-12-06 06:53:06.306326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:53.709 [2024-12-06 06:53:06.306335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:53.709 [2024-12-06 06:53:06.306343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.709 [2024-12-06 06:53:06.306385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:53.709 [2024-12-06 06:53:06.306394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:53.709 [2024-12-06 06:53:06.306403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:53.709 [2024-12-06 06:53:06.306411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.709 [2024-12-06 06:53:06.306551] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 337.523 ms, result 0 00:26:53.709 true 00:26:53.709 06:53:06 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 77580 
00:26:53.709 06:53:06 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77580 ']' 00:26:53.709 06:53:06 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77580 00:26:53.709 06:53:06 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:26:53.709 06:53:06 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:53.709 06:53:06 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77580 00:26:53.709 killing process with pid 77580 00:26:53.709 06:53:06 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:53.709 06:53:06 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:53.709 06:53:06 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77580' 00:26:53.709 06:53:06 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 77580 00:26:53.709 06:53:06 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 77580 00:27:15.621 06:53:24 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:27:15.621 262144+0 records in 00:27:15.621 262144+0 records out 00:27:15.621 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.48869 s, 308 MB/s 00:27:15.621 06:53:28 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:17.517 06:53:30 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:17.775 [2024-12-06 06:53:30.292510] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:27:17.775 [2024-12-06 06:53:30.292605] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77828 ] 00:27:17.775 [2024-12-06 06:53:30.447283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:18.032 [2024-12-06 06:53:30.561276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:18.290 [2024-12-06 06:53:30.839769] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:18.290 [2024-12-06 06:53:30.839845] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:18.290 [2024-12-06 06:53:30.994133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.290 [2024-12-06 06:53:30.994448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:18.290 [2024-12-06 06:53:30.994508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:18.290 [2024-12-06 06:53:30.994524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.290 [2024-12-06 06:53:30.994624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.290 [2024-12-06 06:53:30.994658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:18.290 [2024-12-06 06:53:30.994681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:27:18.290 [2024-12-06 06:53:30.994701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.290 [2024-12-06 06:53:30.994751] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:27:18.290 [2024-12-06 06:53:30.995984] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:18.290 [2024-12-06 06:53:30.996049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.290 [2024-12-06 06:53:30.996071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:18.290 [2024-12-06 06:53:30.996093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.306 ms 00:27:18.290 [2024-12-06 06:53:30.996113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.290 [2024-12-06 06:53:30.997869] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:18.290 [2024-12-06 06:53:31.018866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.290 [2024-12-06 06:53:31.018910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:18.290 [2024-12-06 06:53:31.018925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.998 ms 00:27:18.290 [2024-12-06 06:53:31.018938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.290 [2024-12-06 06:53:31.019010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.290 [2024-12-06 06:53:31.019024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:18.290 [2024-12-06 06:53:31.019036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:27:18.290 [2024-12-06 06:53:31.019047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.290 [2024-12-06 06:53:31.024024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.290 [2024-12-06 06:53:31.024061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:18.290 [2024-12-06 06:53:31.024075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.888 ms 00:27:18.290 [2024-12-06 06:53:31.024092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.290 [2024-12-06 06:53:31.024182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.290 [2024-12-06 06:53:31.024194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:18.290 [2024-12-06 06:53:31.024206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:27:18.290 [2024-12-06 06:53:31.024217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.290 [2024-12-06 06:53:31.024263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.290 [2024-12-06 06:53:31.024276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:18.290 [2024-12-06 06:53:31.024287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:18.290 [2024-12-06 06:53:31.024299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.290 [2024-12-06 06:53:31.024331] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:18.549 [2024-12-06 06:53:31.029566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.549 [2024-12-06 06:53:31.029602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:18.549 [2024-12-06 06:53:31.029619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.241 ms 00:27:18.549 [2024-12-06 06:53:31.029631] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.549 [2024-12-06 06:53:31.029674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.549 [2024-12-06 06:53:31.029686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:18.549 [2024-12-06 06:53:31.029698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:27:18.549 [2024-12-06 06:53:31.029710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.549 [2024-12-06 06:53:31.029769] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:18.549 [2024-12-06 06:53:31.029796] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:18.549 [2024-12-06 06:53:31.029844] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:18.549 [2024-12-06 06:53:31.029869] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:18.549 [2024-12-06 06:53:31.030010] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:18.549 [2024-12-06 06:53:31.030025] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:18.549 [2024-12-06 06:53:31.030040] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:18.549 [2024-12-06 06:53:31.030055] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:18.549 [2024-12-06 06:53:31.030068] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:18.549 [2024-12-06 06:53:31.030080] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:18.550 [2024-12-06 06:53:31.030091] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:18.550 [2024-12-06 06:53:31.030105] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:18.550 [2024-12-06 06:53:31.030116] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:18.550 [2024-12-06 06:53:31.030127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.550 [2024-12-06 06:53:31.030138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:18.550 [2024-12-06 06:53:31.030151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms 00:27:18.550 [2024-12-06 06:53:31.030161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.550 [2024-12-06 06:53:31.030279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.550 [2024-12-06 06:53:31.030296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:18.550 [2024-12-06 06:53:31.030309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:27:18.550 [2024-12-06 06:53:31.030320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.550 [2024-12-06 06:53:31.030476] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:18.550 [2024-12-06 06:53:31.030491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:18.550 [2024-12-06 06:53:31.030503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:27:18.550 [2024-12-06 06:53:31.030516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:18.550 [2024-12-06 06:53:31.030528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:18.550 [2024-12-06 06:53:31.030539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:18.550 [2024-12-06 06:53:31.030549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:18.550 [2024-12-06 06:53:31.030559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:18.550 [2024-12-06 06:53:31.030571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:18.550 [2024-12-06 06:53:31.030581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:18.550 [2024-12-06 06:53:31.030592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:18.550 [2024-12-06 06:53:31.030603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:18.550 [2024-12-06 06:53:31.030613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:18.550 [2024-12-06 06:53:31.030630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:18.550 [2024-12-06 06:53:31.030641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:18.550 [2024-12-06 06:53:31.030651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:18.550 [2024-12-06 06:53:31.030661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:18.550 [2024-12-06 06:53:31.030671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:18.550 [2024-12-06 06:53:31.030681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:18.550 [2024-12-06 06:53:31.030693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:18.550 [2024-12-06 06:53:31.030703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:18.550 [2024-12-06 06:53:31.030715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:18.550 [2024-12-06 06:53:31.030725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:18.550 [2024-12-06 06:53:31.030736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:18.550 [2024-12-06 06:53:31.030746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:18.550 [2024-12-06 06:53:31.030756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:18.550 [2024-12-06 06:53:31.030766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:18.550 [2024-12-06 06:53:31.030776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:18.550 [2024-12-06 06:53:31.030786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:18.550 [2024-12-06 06:53:31.030796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:18.550 [2024-12-06 06:53:31.030806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:18.550 [2024-12-06 06:53:31.030817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:18.550 [2024-12-06 06:53:31.030827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:18.550 [2024-12-06 06:53:31.030837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:18.550 [2024-12-06 06:53:31.030847] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:27:18.550 [2024-12-06 06:53:31.030858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:18.550 [2024-12-06 06:53:31.030868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:18.550 [2024-12-06 06:53:31.030878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:18.550 [2024-12-06 06:53:31.030889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:18.550 [2024-12-06 06:53:31.030899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:18.550 [2024-12-06 06:53:31.030909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:18.550 [2024-12-06 06:53:31.030919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:18.550 [2024-12-06 06:53:31.030929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:18.550 [2024-12-06 06:53:31.030939] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:18.550 [2024-12-06 06:53:31.030951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:18.550 [2024-12-06 06:53:31.030962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:18.550 [2024-12-06 06:53:31.030973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:18.550 [2024-12-06 06:53:31.030984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:18.550 [2024-12-06 06:53:31.030995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:18.550 [2024-12-06 06:53:31.031005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:18.550 [2024-12-06 06:53:31.031016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:18.550 [2024-12-06 06:53:31.031026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:18.550 [2024-12-06 06:53:31.031037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:18.550 [2024-12-06 06:53:31.031050] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:18.550 [2024-12-06 06:53:31.031063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:18.550 [2024-12-06 06:53:31.031079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:18.550 [2024-12-06 06:53:31.031090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:18.550 [2024-12-06 06:53:31.031102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:18.550 [2024-12-06 06:53:31.031113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:18.550 [2024-12-06 06:53:31.031125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:18.550 [2024-12-06 06:53:31.031136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:18.550 [2024-12-06 06:53:31.031147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:18.550 [2024-12-06 06:53:31.031159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:18.550 [2024-12-06 06:53:31.031170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:18.550 [2024-12-06 06:53:31.031182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:18.550 [2024-12-06 06:53:31.031193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:18.550 [2024-12-06 06:53:31.031204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:18.550 [2024-12-06 06:53:31.031216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:18.550 [2024-12-06 06:53:31.031227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:18.550 [2024-12-06 06:53:31.031238] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:18.550 [2024-12-06 06:53:31.031250] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:18.550 [2024-12-06 06:53:31.031262] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:18.550 [2024-12-06 06:53:31.031273] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:18.550 [2024-12-06 06:53:31.031285] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:18.550 [2024-12-06 06:53:31.031296] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:18.550 [2024-12-06 06:53:31.031308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.550 [2024-12-06 06:53:31.031319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:18.550 [2024-12-06 06:53:31.031330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.940 ms 00:27:18.550 [2024-12-06 06:53:31.031342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.550 [2024-12-06 06:53:31.070438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.550 [2024-12-06 06:53:31.070497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:18.550 [2024-12-06 06:53:31.070512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.025 ms 00:27:18.550 [2024-12-06 06:53:31.070528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.550 [2024-12-06 06:53:31.070637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.550 [2024-12-06 06:53:31.070649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:18.550 [2024-12-06 06:53:31.070662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.082 ms 00:27:18.550 [2024-12-06 06:53:31.070673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.550 [2024-12-06 06:53:31.128649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.551 [2024-12-06 06:53:31.128689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:18.551 [2024-12-06 06:53:31.128704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.912 ms 00:27:18.551 [2024-12-06 06:53:31.128716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.551 [2024-12-06 06:53:31.128753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.551 [2024-12-06 06:53:31.128765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:18.551 [2024-12-06 06:53:31.128781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:18.551 [2024-12-06 06:53:31.128792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.551 [2024-12-06 06:53:31.129159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.551 [2024-12-06 06:53:31.129180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:18.551 [2024-12-06 06:53:31.129193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:27:18.551 [2024-12-06 06:53:31.129204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.551 [2024-12-06 06:53:31.129371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.551 [2024-12-06 06:53:31.129384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:18.551 [2024-12-06 06:53:31.129402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:27:18.551 [2024-12-06 06:53:31.129413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.551 [2024-12-06 06:53:31.150879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.551 [2024-12-06 06:53:31.150987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:18.551 [2024-12-06 06:53:31.151022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.435 ms 00:27:18.551 [2024-12-06 06:53:31.151046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.551 [2024-12-06 06:53:31.171171] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:18.551 [2024-12-06 06:53:31.171219] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:18.551 [2024-12-06 06:53:31.171232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.551 [2024-12-06 06:53:31.171240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:18.551 [2024-12-06 06:53:31.171249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.895 ms 00:27:18.551 [2024-12-06 06:53:31.171257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.551 [2024-12-06 06:53:31.195340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.551 [2024-12-06 06:53:31.195378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:18.551 [2024-12-06 06:53:31.195396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.042 ms 00:27:18.551 [2024-12-06 06:53:31.195403] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.551 [2024-12-06 06:53:31.206483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.551 [2024-12-06 06:53:31.206512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:18.551 [2024-12-06 06:53:31.206521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.041 ms 00:27:18.551 [2024-12-06 06:53:31.206529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.551 [2024-12-06 06:53:31.217511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.551 [2024-12-06 06:53:31.217543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:18.551 [2024-12-06 06:53:31.217553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.952 ms 00:27:18.551 [2024-12-06 06:53:31.217561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.551 [2024-12-06 06:53:31.218151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.551 [2024-12-06 06:53:31.218175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:18.551 [2024-12-06 06:53:31.218184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.509 ms 00:27:18.551 [2024-12-06 06:53:31.218193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.551 [2024-12-06 06:53:31.271300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.551 [2024-12-06 06:53:31.271344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:18.551 [2024-12-06 06:53:31.271356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.091 ms 00:27:18.551 [2024-12-06 06:53:31.271368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.551 [2024-12-06 06:53:31.281403] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:18.551 [2024-12-06 06:53:31.283584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.551 [2024-12-06 06:53:31.283613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:18.551 [2024-12-06 06:53:31.283624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.166 ms 00:27:18.551 [2024-12-06 06:53:31.283633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.551 [2024-12-06 06:53:31.283705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.551 [2024-12-06 06:53:31.283716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:18.551 [2024-12-06 06:53:31.283726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:18.551 [2024-12-06 06:53:31.283734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.551 [2024-12-06 06:53:31.283800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.551 [2024-12-06 06:53:31.283810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:18.551 [2024-12-06 06:53:31.283819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:27:18.551 [2024-12-06 06:53:31.283827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.551 [2024-12-06 06:53:31.283847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.551 [2024-12-06 06:53:31.283855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:27:18.551 [2024-12-06 06:53:31.283864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:18.551 [2024-12-06 06:53:31.283872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.551 [2024-12-06 06:53:31.283905] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:18.551 [2024-12-06 06:53:31.283917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.551 [2024-12-06 06:53:31.283925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:18.551 [2024-12-06 06:53:31.283932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:18.551 [2024-12-06 06:53:31.283939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.809 [2024-12-06 06:53:31.306427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.809 [2024-12-06 06:53:31.306483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:18.809 [2024-12-06 06:53:31.306495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.470 ms 00:27:18.809 [2024-12-06 06:53:31.306506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.809 [2024-12-06 06:53:31.306577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.809 [2024-12-06 06:53:31.306586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:18.809 [2024-12-06 06:53:31.306594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:27:18.809 [2024-12-06 06:53:31.306601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.809 [2024-12-06 06:53:31.307753] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 313.216 ms, result 0 00:27:19.742  [2024-12-06T06:53:33.418Z] Copying: 27/1024 [MB] (27 MBps) [2024-12-06T06:53:34.352Z] Copying: 63/1024 [MB] (35 MBps) [2024-12-06T06:53:35.735Z] Copying: 94/1024 [MB] (31 MBps) [2024-12-06T06:53:36.667Z] Copying: 98072/1048576 [kB] (964 kBps) [2024-12-06T06:53:37.602Z] Copying: 119/1024 [MB] (23 MBps) [2024-12-06T06:53:38.537Z] Copying: 147/1024 [MB] (27 MBps) [2024-12-06T06:53:39.470Z] Copying: 184/1024 [MB] (36 MBps) [2024-12-06T06:53:40.401Z] Copying: 209/1024 [MB] (24 MBps) [2024-12-06T06:53:41.334Z] Copying: 245/1024 [MB] (36 MBps) [2024-12-06T06:53:42.709Z] Copying: 278/1024 [MB] (33 MBps) [2024-12-06T06:53:43.642Z] Copying: 302/1024 [MB] (23 MBps) [2024-12-06T06:53:44.575Z] Copying: 336/1024 [MB] (34 MBps) [2024-12-06T06:53:45.509Z] Copying: 379/1024 [MB] (42 MBps) [2024-12-06T06:53:46.443Z] Copying: 424/1024 [MB] (44 MBps) [2024-12-06T06:53:47.375Z] Copying: 468/1024 [MB] (44 MBps) [2024-12-06T06:53:48.750Z] Copying: 511/1024 [MB] (42 MBps) [2024-12-06T06:53:49.679Z] Copying: 554/1024 [MB] (43 MBps) [2024-12-06T06:53:50.610Z] Copying: 599/1024 [MB] (44 MBps) [2024-12-06T06:53:51.551Z] Copying: 644/1024 [MB] (45 MBps) [2024-12-06T06:53:52.484Z] Copying: 691/1024 [MB] (46 MBps) [2024-12-06T06:53:53.416Z] Copying: 737/1024 [MB] (46 MBps) [2024-12-06T06:53:54.347Z] Copying: 780/1024 [MB] (43 MBps) [2024-12-06T06:53:55.732Z] Copying: 824/1024 [MB] (44 MBps) [2024-12-06T06:53:56.662Z] Copying: 867/1024 [MB] (43 MBps) [2024-12-06T06:53:57.595Z] Copying: 914/1024 [MB] (46 MBps) [2024-12-06T06:53:58.530Z] Copying: 959/1024 [MB] (45 MBps) [2024-12-06T06:53:58.789Z] Copying: 1003/1024 [MB] 
(44 MBps) [2024-12-06T06:53:58.789Z] Copying: 1024/1024 [MB] (average 37 MBps)[2024-12-06 06:53:58.774844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.048 [2024-12-06 06:53:58.774894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:46.048 [2024-12-06 06:53:58.774907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:46.048 [2024-12-06 06:53:58.774915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.048 [2024-12-06 06:53:58.774934] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:46.048 [2024-12-06 06:53:58.777518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.048 [2024-12-06 06:53:58.777550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:46.048 [2024-12-06 06:53:58.777565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.570 ms 00:27:46.048 [2024-12-06 06:53:58.777574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.048 [2024-12-06 06:53:58.778955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.048 [2024-12-06 06:53:58.778986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:46.048 [2024-12-06 06:53:58.778995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.361 ms 00:27:46.048 [2024-12-06 06:53:58.779003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.308 [2024-12-06 06:53:58.793504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.308 [2024-12-06 06:53:58.793535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:46.308 [2024-12-06 06:53:58.793545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.486 ms 00:27:46.308 [2024-12-06 06:53:58.793552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.308 [2024-12-06 06:53:58.799669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.308 [2024-12-06 06:53:58.799695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:46.308 [2024-12-06 06:53:58.799704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.085 ms 00:27:46.308 [2024-12-06 06:53:58.799713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.308 [2024-12-06 06:53:58.822835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.308 [2024-12-06 06:53:58.822867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:46.308 [2024-12-06 06:53:58.822878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.075 ms 00:27:46.308 [2024-12-06 06:53:58.822886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.308 [2024-12-06 06:53:58.836904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.308 [2024-12-06 06:53:58.837069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:46.308 [2024-12-06 06:53:58.837085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.986 ms 00:27:46.308 [2024-12-06 06:53:58.837094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.308 [2024-12-06 06:53:58.837236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.308 [2024-12-06 06:53:58.837250] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:46.308 [2024-12-06 06:53:58.837258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:27:46.308 [2024-12-06 06:53:58.837266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.308 [2024-12-06 06:53:58.859296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.308 [2024-12-06 06:53:58.859326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:46.308 [2024-12-06 06:53:58.859336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.015 ms 00:27:46.308 [2024-12-06 06:53:58.859343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.308 [2024-12-06 06:53:58.881068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.308 [2024-12-06 06:53:58.881098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:46.308 [2024-12-06 06:53:58.881108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.693 ms 00:27:46.308 [2024-12-06 06:53:58.881116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.308 [2024-12-06 06:53:58.902738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.308 [2024-12-06 06:53:58.902768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:46.309 [2024-12-06 06:53:58.902778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.590 ms 00:27:46.309 [2024-12-06 06:53:58.902786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.309 [2024-12-06 06:53:58.924500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.309 [2024-12-06 06:53:58.924529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:46.309 [2024-12-06 06:53:58.924539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.660 ms 00:27:46.309 [2024-12-06 06:53:58.924546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.309 [2024-12-06 06:53:58.924576] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:46.309 [2024-12-06 06:53:58.924592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924670] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 
06:53:58.924859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.924999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.925006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.925013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.925021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.925028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:46.309 [2024-12-06 06:53:58.925036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 
00:27:46.309 [2024-12-06 06:53:58.925043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:46.310 [2024-12-06 06:53:58.925050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:46.310 [2024-12-06 06:53:58.925057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:46.310 [2024-12-06 06:53:58.925064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:46.310 [2024-12-06 06:53:58.925073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:46.310 [2024-12-06 06:53:58.925080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:46.310 [2024-12-06 06:53:58.925087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:46.310 [2024-12-06 06:53:58.925094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:46.310 [2024-12-06 06:53:58.925102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:46.310 [2024-12-06 06:53:58.925110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:46.310 [2024-12-06 06:53:58.925117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:46.310 [2024-12-06 06:53:58.925124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:46.310 [2024-12-06 06:53:58.925131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:46.310 [2024-12-06 06:53:58.925139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:46.310 [2024-12-06 06:53:58.925146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:46.310 [2024-12-06 06:53:58.925154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:46.310 [2024-12-06 06:53:58.925161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:46.310 [2024-12-06 06:53:58.925169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:46.310 [2024-12-06 06:53:58.925176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:46.310 [2024-12-06 06:53:58.925183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:46.310 [2024-12-06 06:53:58.925191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:46.310 [2024-12-06 06:53:58.925198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:46.310 [2024-12-06 06:53:58.925205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:46.310 [2024-12-06 06:53:58.925212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:46.310 [2024-12-06 06:53:58.925219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 
wr_cnt: 0 state: free 00:27:46.310 [2024-12-06 06:53:58.925227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:46.310 [2024-12-06 06:53:58.925235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:46.310 [2024-12-06 06:53:58.925243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:46.310 [2024-12-06 06:53:58.925250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:46.310 [2024-12-06 06:53:58.925257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:46.310 [2024-12-06 06:53:58.925265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:46.310 [2024-12-06 06:53:58.925272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:46.310 [2024-12-06 06:53:58.925282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:46.310 [2024-12-06 06:53:58.925289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:46.310 [2024-12-06 06:53:58.925296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:46.310 [2024-12-06 06:53:58.925303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:46.310 [2024-12-06 06:53:58.925314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:46.310 [2024-12-06 06:53:58.925322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:46.310 [2024-12-06 06:53:58.925329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:46.310 [2024-12-06 06:53:58.925337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:46.310 [2024-12-06 06:53:58.925344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:46.310 [2024-12-06 06:53:58.925359] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:46.310 [2024-12-06 06:53:58.925369] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d910f68e-4869-4d1c-9d11-c1e41849a5be 00:27:46.310 [2024-12-06 06:53:58.925377] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:46.310 [2024-12-06 06:53:58.925384] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:46.310 [2024-12-06 06:53:58.925390] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:46.310 [2024-12-06 06:53:58.925398] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:46.310 [2024-12-06 06:53:58.925406] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:46.310 [2024-12-06 06:53:58.925418] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:46.310 [2024-12-06 06:53:58.925426] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:46.310 [2024-12-06 06:53:58.925432] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:46.310 [2024-12-06 06:53:58.925438] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] start: 0 00:27:46.310 [2024-12-06 06:53:58.925445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.310 [2024-12-06 06:53:58.925452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:46.310 [2024-12-06 06:53:58.925459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.869 ms 00:27:46.310 [2024-12-06 06:53:58.925483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.310 [2024-12-06 06:53:58.937502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.310 [2024-12-06 06:53:58.937529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:46.310 [2024-12-06 06:53:58.937539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.001 ms 00:27:46.310 [2024-12-06 06:53:58.937548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.310 [2024-12-06 06:53:58.937878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.310 [2024-12-06 06:53:58.937893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:46.310 [2024-12-06 06:53:58.937903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:27:46.310 [2024-12-06 06:53:58.937914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.310 [2024-12-06 06:53:58.970392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.310 [2024-12-06 06:53:58.970426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:46.310 [2024-12-06 06:53:58.970436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.310 [2024-12-06 06:53:58.970444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.310 [2024-12-06 06:53:58.970510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.310 [2024-12-06 06:53:58.970520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:46.310 [2024-12-06 06:53:58.970528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.310 [2024-12-06 06:53:58.970539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.310 [2024-12-06 06:53:58.970590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.311 [2024-12-06 06:53:58.970599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:46.311 [2024-12-06 06:53:58.970607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.311 [2024-12-06 06:53:58.970615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.311 [2024-12-06 06:53:58.970630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.311 [2024-12-06 06:53:58.970638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:46.311 [2024-12-06 06:53:58.970645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.311 [2024-12-06 06:53:58.970652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.569 [2024-12-06 06:53:59.045779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.569 [2024-12-06 06:53:59.045825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:46.569 [2024-12-06 06:53:59.045836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.569 
[2024-12-06 06:53:59.045844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.569 [2024-12-06 06:53:59.107353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.569 [2024-12-06 06:53:59.107403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:46.569 [2024-12-06 06:53:59.107413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.569 [2024-12-06 06:53:59.107425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.569 [2024-12-06 06:53:59.107517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.569 [2024-12-06 06:53:59.107528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:46.569 [2024-12-06 06:53:59.107536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.569 [2024-12-06 06:53:59.107543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.569 [2024-12-06 06:53:59.107577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.569 [2024-12-06 06:53:59.107585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:46.569 [2024-12-06 06:53:59.107594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.569 [2024-12-06 06:53:59.107601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.569 [2024-12-06 06:53:59.107687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.569 [2024-12-06 06:53:59.107698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:46.569 [2024-12-06 06:53:59.107706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.569 [2024-12-06 06:53:59.107713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.569 [2024-12-06 06:53:59.107743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.569 [2024-12-06 06:53:59.107752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:46.569 [2024-12-06 06:53:59.107760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.569 [2024-12-06 06:53:59.107767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.569 [2024-12-06 06:53:59.107798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.569 [2024-12-06 06:53:59.107809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:46.569 [2024-12-06 06:53:59.107817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.569 [2024-12-06 06:53:59.107825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.569 [2024-12-06 06:53:59.107863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.569 [2024-12-06 06:53:59.107873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:46.569 [2024-12-06 06:53:59.107881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.569 [2024-12-06 06:53:59.107887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.569 [2024-12-06 06:53:59.107995] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 333.123 ms, result 0 00:27:49.851 00:27:49.851 00:27:49.851 06:54:02 ftl.ftl_restore -- ftl/restore.sh@74 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:27:49.851 [2024-12-06 06:54:02.346356] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:27:49.851 [2024-12-06 06:54:02.346497] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78137 ] 00:27:49.851 [2024-12-06 06:54:02.506316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.118 [2024-12-06 06:54:02.605294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:50.377 [2024-12-06 06:54:02.861079] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:50.377 [2024-12-06 06:54:02.861140] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:50.377 [2024-12-06 06:54:03.014626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.377 [2024-12-06 06:54:03.014812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:50.377 [2024-12-06 06:54:03.014833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:50.377 [2024-12-06 06:54:03.014843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.377 [2024-12-06 06:54:03.014896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.377 [2024-12-06 06:54:03.014908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:50.377 [2024-12-06 06:54:03.014916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:27:50.377 [2024-12-06 06:54:03.014923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.377 [2024-12-06 06:54:03.014943] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:50.377 [2024-12-06 06:54:03.015683] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:50.377 [2024-12-06 06:54:03.015701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.377 [2024-12-06 06:54:03.015710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:50.377 [2024-12-06 06:54:03.015718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.762 ms 00:27:50.377 [2024-12-06 06:54:03.015725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.377 [2024-12-06 06:54:03.016765] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:50.377 [2024-12-06 06:54:03.028943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.377 [2024-12-06 06:54:03.028977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:50.377 [2024-12-06 06:54:03.028989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.180 ms 00:27:50.377 [2024-12-06 06:54:03.028997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.377 [2024-12-06 06:54:03.029052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.377 [2024-12-06 06:54:03.029061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:50.377 [2024-12-06 
06:54:03.029070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:27:50.377 [2024-12-06 06:54:03.029076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.377 [2024-12-06 06:54:03.033920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.377 [2024-12-06 06:54:03.034079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:50.377 [2024-12-06 06:54:03.034094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.789 ms 00:27:50.377 [2024-12-06 06:54:03.034107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.377 [2024-12-06 06:54:03.034175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.377 [2024-12-06 06:54:03.034184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:50.377 [2024-12-06 06:54:03.034193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:27:50.377 [2024-12-06 06:54:03.034201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.377 [2024-12-06 06:54:03.034243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.377 [2024-12-06 06:54:03.034253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:50.377 [2024-12-06 06:54:03.034261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:50.377 [2024-12-06 06:54:03.034269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.377 [2024-12-06 06:54:03.034292] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:50.377 [2024-12-06 06:54:03.037644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.377 [2024-12-06 06:54:03.037670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:50.377 [2024-12-06 06:54:03.037681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.356 ms 00:27:50.377 [2024-12-06 06:54:03.037689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.377 [2024-12-06 06:54:03.037719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.377 [2024-12-06 06:54:03.037728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:50.377 [2024-12-06 06:54:03.037736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:50.378 [2024-12-06 06:54:03.037743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.378 [2024-12-06 06:54:03.037762] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:50.378 [2024-12-06 06:54:03.037781] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:50.378 [2024-12-06 06:54:03.037815] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:50.378 [2024-12-06 06:54:03.037832] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:50.378 [2024-12-06 06:54:03.037933] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:50.378 [2024-12-06 06:54:03.037944] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:50.378 [2024-12-06 06:54:03.037955] 
upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:50.378 [2024-12-06 06:54:03.037964] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:50.378 [2024-12-06 06:54:03.037973] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:50.378 [2024-12-06 06:54:03.037982] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:50.378 [2024-12-06 06:54:03.037989] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:50.378 [2024-12-06 06:54:03.037998] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:50.378 [2024-12-06 06:54:03.038006] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:50.378 [2024-12-06 06:54:03.038013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.378 [2024-12-06 06:54:03.038020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:50.378 [2024-12-06 06:54:03.038028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:27:50.378 [2024-12-06 06:54:03.038036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.378 [2024-12-06 06:54:03.038118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.378 [2024-12-06 06:54:03.038126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:50.378 [2024-12-06 06:54:03.038134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:27:50.378 [2024-12-06 06:54:03.038141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.378 [2024-12-06 06:54:03.038257] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:50.378 [2024-12-06 06:54:03.038268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:50.378 [2024-12-06 06:54:03.038276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:50.378 [2024-12-06 06:54:03.038284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:50.378 [2024-12-06 06:54:03.038291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:50.378 [2024-12-06 06:54:03.038298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:50.378 [2024-12-06 06:54:03.038305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:50.378 [2024-12-06 06:54:03.038313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:50.378 [2024-12-06 06:54:03.038320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:50.378 [2024-12-06 06:54:03.038327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:50.378 [2024-12-06 06:54:03.038334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:50.378 [2024-12-06 06:54:03.038341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:50.378 [2024-12-06 06:54:03.038347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:50.378 [2024-12-06 06:54:03.038360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:50.378 [2024-12-06 06:54:03.038367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:50.378 [2024-12-06 06:54:03.038374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 
MiB 00:27:50.378 [2024-12-06 06:54:03.038381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:50.378 [2024-12-06 06:54:03.038387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:50.378 [2024-12-06 06:54:03.038395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:50.378 [2024-12-06 06:54:03.038402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:50.378 [2024-12-06 06:54:03.038410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:50.378 [2024-12-06 06:54:03.038416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:50.378 [2024-12-06 06:54:03.038423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:50.378 [2024-12-06 06:54:03.038429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:50.378 [2024-12-06 06:54:03.038435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:50.378 [2024-12-06 06:54:03.038442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:50.378 [2024-12-06 06:54:03.038449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:50.378 [2024-12-06 06:54:03.038455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:50.378 [2024-12-06 06:54:03.038478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:50.378 [2024-12-06 06:54:03.038485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:50.378 [2024-12-06 06:54:03.038492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:50.378 [2024-12-06 06:54:03.038499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:50.378 [2024-12-06 06:54:03.038506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:50.378 [2024-12-06 06:54:03.038512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:50.378 [2024-12-06 06:54:03.038519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:50.378 [2024-12-06 06:54:03.038526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:50.378 [2024-12-06 06:54:03.038532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:50.378 [2024-12-06 06:54:03.038539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:50.378 [2024-12-06 06:54:03.038546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:50.378 [2024-12-06 06:54:03.038552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:50.378 [2024-12-06 06:54:03.038559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:50.378 [2024-12-06 06:54:03.038565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:50.378 [2024-12-06 06:54:03.038573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:50.378 [2024-12-06 06:54:03.038579] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:50.378 [2024-12-06 06:54:03.038587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:50.378 [2024-12-06 06:54:03.038599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:50.378 [2024-12-06 06:54:03.038607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:50.378 [2024-12-06 06:54:03.038614] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:50.378 [2024-12-06 06:54:03.038621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:50.378 [2024-12-06 06:54:03.038627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:50.378 [2024-12-06 06:54:03.038635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:50.378 [2024-12-06 06:54:03.038641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:50.378 [2024-12-06 06:54:03.038648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:50.378 [2024-12-06 06:54:03.038656] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:50.378 [2024-12-06 06:54:03.038665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:50.378 [2024-12-06 06:54:03.038675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:50.378 [2024-12-06 06:54:03.038683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:50.378 [2024-12-06 06:54:03.038690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:50.378 [2024-12-06 06:54:03.038697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:50.378 [2024-12-06 06:54:03.038703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:50.378 [2024-12-06 06:54:03.038711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:50.378 [2024-12-06 06:54:03.038719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:50.378 [2024-12-06 06:54:03.038726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:50.378 [2024-12-06 06:54:03.038733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:50.378 [2024-12-06 06:54:03.038740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:50.378 [2024-12-06 06:54:03.038747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:50.378 [2024-12-06 06:54:03.038754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:50.378 [2024-12-06 06:54:03.038761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:50.378 [2024-12-06 06:54:03.038768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:50.379 [2024-12-06 06:54:03.038775] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:50.379 
[2024-12-06 06:54:03.038783] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:50.379 [2024-12-06 06:54:03.038791] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:50.379 [2024-12-06 06:54:03.038798] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:50.379 [2024-12-06 06:54:03.038806] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:50.379 [2024-12-06 06:54:03.038812] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:50.379 [2024-12-06 06:54:03.038819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.379 [2024-12-06 06:54:03.038826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:50.379 [2024-12-06 06:54:03.038833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.632 ms 00:27:50.379 [2024-12-06 06:54:03.038840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.379 [2024-12-06 06:54:03.064360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.379 [2024-12-06 06:54:03.064398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:50.379 [2024-12-06 06:54:03.064408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.478 ms 00:27:50.379 [2024-12-06 06:54:03.064420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.379 [2024-12-06 06:54:03.064520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.379 [2024-12-06 06:54:03.064529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:50.379 [2024-12-06 06:54:03.064538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:27:50.379 [2024-12-06 06:54:03.064546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.379 [2024-12-06 06:54:03.105574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.379 [2024-12-06 06:54:03.105772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:50.379 [2024-12-06 06:54:03.105791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.970 ms 00:27:50.379 [2024-12-06 06:54:03.105800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.379 [2024-12-06 06:54:03.105851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.379 [2024-12-06 06:54:03.105862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:50.379 [2024-12-06 06:54:03.105875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:50.379 [2024-12-06 06:54:03.105883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.379 [2024-12-06 06:54:03.106245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.379 [2024-12-06 06:54:03.106262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:50.379 [2024-12-06 06:54:03.106272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:27:50.379 [2024-12-06 06:54:03.106280] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:27:50.379 [2024-12-06 06:54:03.106402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.379 [2024-12-06 06:54:03.106412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:50.379 [2024-12-06 06:54:03.106420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:27:50.379 [2024-12-06 06:54:03.106432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.638 [2024-12-06 06:54:03.119446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.638 [2024-12-06 06:54:03.119496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:50.638 [2024-12-06 06:54:03.119509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.994 ms 00:27:50.638 [2024-12-06 06:54:03.119517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.638 [2024-12-06 06:54:03.131593] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:50.638 [2024-12-06 06:54:03.131629] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:50.638 [2024-12-06 06:54:03.131641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.638 [2024-12-06 06:54:03.131649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:50.638 [2024-12-06 06:54:03.131658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.007 ms 00:27:50.638 [2024-12-06 06:54:03.131667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.638 [2024-12-06 06:54:03.159258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.638 [2024-12-06 06:54:03.159325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:50.638 [2024-12-06 06:54:03.159339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.546 ms 00:27:50.638 [2024-12-06 06:54:03.159348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.638 [2024-12-06 06:54:03.171096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.638 [2024-12-06 06:54:03.171135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:50.638 [2024-12-06 06:54:03.171146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.663 ms 00:27:50.638 [2024-12-06 06:54:03.171154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.638 [2024-12-06 06:54:03.182201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.638 [2024-12-06 06:54:03.182248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:50.638 [2024-12-06 06:54:03.182263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.010 ms 00:27:50.638 [2024-12-06 06:54:03.182275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.638 [2024-12-06 06:54:03.183197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.638 [2024-12-06 06:54:03.183231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:50.638 [2024-12-06 06:54:03.183250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.806 ms 00:27:50.638 [2024-12-06 06:54:03.183263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.638 [2024-12-06 
06:54:03.238854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.638 [2024-12-06 06:54:03.238912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:50.638 [2024-12-06 06:54:03.238931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.565 ms 00:27:50.638 [2024-12-06 06:54:03.238939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.638 [2024-12-06 06:54:03.249669] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:50.638 [2024-12-06 06:54:03.252249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.638 [2024-12-06 06:54:03.252280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:50.638 [2024-12-06 06:54:03.252294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.257 ms 00:27:50.638 [2024-12-06 06:54:03.252303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.638 [2024-12-06 06:54:03.252405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.638 [2024-12-06 06:54:03.252417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:50.638 [2024-12-06 06:54:03.252429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:50.638 [2024-12-06 06:54:03.252437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.638 [2024-12-06 06:54:03.252518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.638 [2024-12-06 06:54:03.252529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:50.638 [2024-12-06 06:54:03.252538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:27:50.638 [2024-12-06 06:54:03.252545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.638 [2024-12-06 06:54:03.252563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.638 [2024-12-06 06:54:03.252572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:50.638 [2024-12-06 06:54:03.252580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:50.638 [2024-12-06 06:54:03.252588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.638 [2024-12-06 06:54:03.252620] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:50.639 [2024-12-06 06:54:03.252629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.639 [2024-12-06 06:54:03.252637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:50.639 [2024-12-06 06:54:03.252645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:50.639 [2024-12-06 06:54:03.252653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.639 [2024-12-06 06:54:03.275702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.639 [2024-12-06 06:54:03.275864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:50.639 [2024-12-06 06:54:03.276377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.030 ms 00:27:50.639 [2024-12-06 06:54:03.276421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.639 [2024-12-06 06:54:03.276592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.639 [2024-12-06 06:54:03.276706] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:50.639 [2024-12-06 06:54:03.276756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:27:50.639 [2024-12-06 06:54:03.276779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.639 [2024-12-06 06:54:03.277814] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 262.765 ms, result 0 00:27:52.012  [2024-12-06T06:54:05.685Z] Copying: 45/1024 [MB] (45 MBps) [2024-12-06T06:54:06.616Z] Copying: 91/1024 [MB] (46 MBps) [2024-12-06T06:54:07.549Z] Copying: 139/1024 [MB] (47 MBps) [2024-12-06T06:54:08.482Z] Copying: 186/1024 [MB] (47 MBps) [2024-12-06T06:54:09.858Z] Copying: 232/1024 [MB] (45 MBps) [2024-12-06T06:54:10.797Z] Copying: 279/1024 [MB] (46 MBps) [2024-12-06T06:54:11.737Z] Copying: 307/1024 [MB] (28 MBps) [2024-12-06T06:54:12.672Z] Copying: 339/1024 [MB] (32 MBps) [2024-12-06T06:54:13.611Z] Copying: 369/1024 [MB] (29 MBps) [2024-12-06T06:54:14.552Z] Copying: 397/1024 [MB] (27 MBps) [2024-12-06T06:54:15.493Z] Copying: 434/1024 [MB] (37 MBps) [2024-12-06T06:54:16.872Z] Copying: 469/1024 [MB] (34 MBps) [2024-12-06T06:54:17.807Z] Copying: 497/1024 [MB] (27 MBps) [2024-12-06T06:54:18.742Z] Copying: 538/1024 [MB] (41 MBps) [2024-12-06T06:54:19.675Z] Copying: 578/1024 [MB] (40 MBps) [2024-12-06T06:54:20.629Z] Copying: 625/1024 [MB] (46 MBps) [2024-12-06T06:54:21.564Z] Copying: 670/1024 [MB] (44 MBps) [2024-12-06T06:54:22.498Z] Copying: 717/1024 [MB] (46 MBps) [2024-12-06T06:54:23.870Z] Copying: 763/1024 [MB] (45 MBps) [2024-12-06T06:54:24.807Z] Copying: 809/1024 [MB] (46 MBps) [2024-12-06T06:54:25.739Z] Copying: 856/1024 [MB] (46 MBps) [2024-12-06T06:54:26.674Z] Copying: 903/1024 [MB] (46 MBps) [2024-12-06T06:54:27.631Z] Copying: 948/1024 [MB] (45 MBps) [2024-12-06T06:54:28.574Z] Copying: 991/1024 [MB] (42 MBps) [2024-12-06T06:54:29.957Z] Copying: 1024/1024 [MB] (average 41 MBps)[2024-12-06 06:54:29.860438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.216 [2024-12-06 06:54:29.860525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:17.216 [2024-12-06 06:54:29.860542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:17.216 [2024-12-06 06:54:29.860551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.216 [2024-12-06 06:54:29.860575] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:17.216 [2024-12-06 06:54:29.863337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.216 [2024-12-06 06:54:29.863377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:17.216 [2024-12-06 06:54:29.863406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.745 ms 00:28:17.216 [2024-12-06 06:54:29.863416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.216 [2024-12-06 06:54:29.863658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.216 [2024-12-06 06:54:29.863675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:17.216 [2024-12-06 06:54:29.863685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:28:17.216 [2024-12-06 06:54:29.863695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.216 [2024-12-06 06:54:29.867134] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.216 [2024-12-06 06:54:29.867157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:17.216 [2024-12-06 06:54:29.867168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.426 ms 00:28:17.216 [2024-12-06 06:54:29.867180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.216 [2024-12-06 06:54:29.873442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.216 [2024-12-06 06:54:29.873668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:17.216 [2024-12-06 06:54:29.873686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.245 ms 00:28:17.216 [2024-12-06 06:54:29.873699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.216 [2024-12-06 06:54:29.898284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.216 [2024-12-06 06:54:29.898323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:17.216 [2024-12-06 06:54:29.898335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.512 ms 00:28:17.216 [2024-12-06 06:54:29.898344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.216 [2024-12-06 06:54:29.912371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.216 [2024-12-06 06:54:29.912405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:17.216 [2024-12-06 06:54:29.912418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.003 ms 00:28:17.216 [2024-12-06 06:54:29.912427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.216 [2024-12-06 06:54:29.912581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.216 [2024-12-06 06:54:29.912594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:17.216 [2024-12-06 06:54:29.912603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:28:17.216 [2024-12-06 06:54:29.912611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.216 [2024-12-06 06:54:29.935508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.216 [2024-12-06 06:54:29.935656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:17.216 [2024-12-06 06:54:29.935673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.881 ms 00:28:17.216 [2024-12-06 06:54:29.935681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.478 [2024-12-06 06:54:29.958105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.478 [2024-12-06 06:54:29.958138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:17.478 [2024-12-06 06:54:29.958150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.404 ms 00:28:17.478 [2024-12-06 06:54:29.958158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.478 [2024-12-06 06:54:29.980151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.478 [2024-12-06 06:54:29.980182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:17.478 [2024-12-06 06:54:29.980193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.972 ms 00:28:17.478 [2024-12-06 06:54:29.980201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:28:17.478 [2024-12-06 06:54:30.011906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.478 [2024-12-06 06:54:30.011938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:17.478 [2024-12-06 06:54:30.011949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.662 ms 00:28:17.478 [2024-12-06 06:54:30.011958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.478 [2024-12-06 06:54:30.011977] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:17.478 [2024-12-06 06:54:30.011998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 
state: free 00:28:17.478 [2024-12-06 06:54:30.012184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 
0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:17.478 [2024-12-06 06:54:30.012608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:17.479 [2024-12-06 06:54:30.012616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:17.479 [2024-12-06 06:54:30.012623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:17.479 [2024-12-06 06:54:30.012631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:17.479 [2024-12-06 06:54:30.012638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:17.479 [2024-12-06 06:54:30.012646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:17.479 [2024-12-06 06:54:30.012653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:17.479 [2024-12-06 06:54:30.012660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:17.479 [2024-12-06 06:54:30.012668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:17.479 [2024-12-06 06:54:30.012675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:17.479 [2024-12-06 06:54:30.012683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:17.479 [2024-12-06 06:54:30.012691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:17.479 [2024-12-06 06:54:30.012699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:17.479 [2024-12-06 06:54:30.012707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:17.479 [2024-12-06 06:54:30.012714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:17.479 [2024-12-06 06:54:30.012721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:17.479 [2024-12-06 06:54:30.012729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:17.479 [2024-12-06 06:54:30.012737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:17.479 [2024-12-06 06:54:30.012744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:17.479 [2024-12-06 06:54:30.012751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:17.479 [2024-12-06 06:54:30.012760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:17.479 [2024-12-06 06:54:30.012767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:17.479 [2024-12-06 06:54:30.012776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:17.479 [2024-12-06 06:54:30.012784] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:17.479 [2024-12-06 06:54:30.012792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:17.479 [2024-12-06 06:54:30.012800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:17.479 [2024-12-06 06:54:30.012807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:17.479 [2024-12-06 06:54:30.012815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:17.479 [2024-12-06 06:54:30.012831] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:17.479 [2024-12-06 06:54:30.012839] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d910f68e-4869-4d1c-9d11-c1e41849a5be 00:28:17.479 [2024-12-06 06:54:30.012847] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:17.479 [2024-12-06 06:54:30.012855] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:17.479 [2024-12-06 06:54:30.012862] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:17.479 [2024-12-06 06:54:30.012870] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:17.479 [2024-12-06 06:54:30.012902] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:17.479 [2024-12-06 06:54:30.012910] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:17.479 [2024-12-06 06:54:30.012918] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:17.479 [2024-12-06 06:54:30.012925] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:17.479 [2024-12-06 06:54:30.012931] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:17.479 [2024-12-06 06:54:30.012939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.479 [2024-12-06 06:54:30.012946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:17.479 [2024-12-06 06:54:30.012955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.963 ms 00:28:17.479 [2024-12-06 06:54:30.012964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.479 [2024-12-06 06:54:30.025910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.479 [2024-12-06 06:54:30.025938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:17.479 [2024-12-06 06:54:30.025949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.917 ms 00:28:17.479 [2024-12-06 06:54:30.025957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.479 [2024-12-06 06:54:30.026311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.479 [2024-12-06 06:54:30.026326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:17.479 [2024-12-06 06:54:30.026340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:28:17.479 [2024-12-06 06:54:30.026347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.479 [2024-12-06 06:54:30.061036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.479 [2024-12-06 06:54:30.061201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:17.479 [2024-12-06 06:54:30.061218] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.479 [2024-12-06 06:54:30.061227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.479 [2024-12-06 06:54:30.061285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.479 [2024-12-06 06:54:30.061295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:17.479 [2024-12-06 06:54:30.061308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.479 [2024-12-06 06:54:30.061316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.479 [2024-12-06 06:54:30.061392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.479 [2024-12-06 06:54:30.061403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:17.479 [2024-12-06 06:54:30.061412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.479 [2024-12-06 06:54:30.061419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.479 [2024-12-06 06:54:30.061434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.479 [2024-12-06 06:54:30.061442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:17.479 [2024-12-06 06:54:30.061450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.479 [2024-12-06 06:54:30.061477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.479 [2024-12-06 06:54:30.142924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.479 [2024-12-06 06:54:30.142968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:17.479 [2024-12-06 06:54:30.142980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.479 [2024-12-06 06:54:30.142989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.479 [2024-12-06 06:54:30.209244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.479 [2024-12-06 06:54:30.209289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:17.479 [2024-12-06 06:54:30.209306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.479 [2024-12-06 06:54:30.209315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.479 [2024-12-06 06:54:30.209375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.479 [2024-12-06 06:54:30.209384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:17.479 [2024-12-06 06:54:30.209393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.479 [2024-12-06 06:54:30.209401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.479 [2024-12-06 06:54:30.209500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.479 [2024-12-06 06:54:30.209512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:17.479 [2024-12-06 06:54:30.209521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.479 [2024-12-06 06:54:30.209530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.479 [2024-12-06 06:54:30.209624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.479 [2024-12-06 06:54:30.209635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize memory pools 00:28:17.479 [2024-12-06 06:54:30.209644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.479 [2024-12-06 06:54:30.209653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.479 [2024-12-06 06:54:30.209684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.479 [2024-12-06 06:54:30.209693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:17.479 [2024-12-06 06:54:30.209703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.479 [2024-12-06 06:54:30.209711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.479 [2024-12-06 06:54:30.209753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.479 [2024-12-06 06:54:30.209762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:17.479 [2024-12-06 06:54:30.209771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.479 [2024-12-06 06:54:30.209779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.479 [2024-12-06 06:54:30.209822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:17.479 [2024-12-06 06:54:30.209832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:17.479 [2024-12-06 06:54:30.209843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:17.479 [2024-12-06 06:54:30.209852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.479 [2024-12-06 06:54:30.209976] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 349.509 ms, result 0 00:28:18.418 00:28:18.418 00:28:18.418 06:54:30 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:20.942 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:28:20.942 06:54:33 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:28:20.942 [2024-12-06 06:54:33.121449] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
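A quick read of the shutdown dump above: with user writes: 0 against total writes: 960, the reported WAF: inf is simply the media-to-user write ratio degenerating to 960/0 — all 960 block writes were presumably FTL housekeeping (superblock, band/valid/trim metadata), not host data. The restore.sh steps echoed above then re-exercise the device: verify the test file against its stored checksum, and write it back into the FTL bdev at an offset. A minimal sketch of those two steps, with every path and flag copied verbatim from the log (the .md5 file is assumed to have been produced earlier in the test by running md5sum over the same testfile):

# Step 1 (restore.sh line 76): verify the test file against its stored MD5.
md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5

# Step 2 (restore.sh line 79): write the file back into the ftl0 bdev.
# --seek skips 131072 output I/O units before writing; the JSON config
# recreates the bdev stack (base device plus nvc0n1 cache) for spdk_dd.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
    --ob=ftl0 \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json \
    --seek=131072

The two "Currently unable to find bdev with name: nvc0n1" notices just below appear to be bdev_open_ext polling while the JSON config is still instantiating the cache bdev; the run proceeds once nvc0n1p0 is claimed as the write buffer cache.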
00:28:20.942 [2024-12-06 06:54:33.121585] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78458 ] 00:28:20.942 [2024-12-06 06:54:33.281646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:20.942 [2024-12-06 06:54:33.381819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.942 [2024-12-06 06:54:33.640010] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:20.942 [2024-12-06 06:54:33.640233] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:21.200 [2024-12-06 06:54:33.794657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.200 [2024-12-06 06:54:33.794854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:21.200 [2024-12-06 06:54:33.794875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:21.200 [2024-12-06 06:54:33.794884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.200 [2024-12-06 06:54:33.794941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.200 [2024-12-06 06:54:33.794953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:21.200 [2024-12-06 06:54:33.794961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:28:21.200 [2024-12-06 06:54:33.794969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.200 [2024-12-06 06:54:33.794989] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:21.200 [2024-12-06 06:54:33.795724] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:21.200 [2024-12-06 06:54:33.795747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.200 [2024-12-06 06:54:33.795755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:21.200 [2024-12-06 06:54:33.795764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.763 ms 00:28:21.200 [2024-12-06 06:54:33.795771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.200 [2024-12-06 06:54:33.797204] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:21.200 [2024-12-06 06:54:33.809427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.200 [2024-12-06 06:54:33.809485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:21.200 [2024-12-06 06:54:33.809499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.225 ms 00:28:21.200 [2024-12-06 06:54:33.809507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.200 [2024-12-06 06:54:33.809570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.200 [2024-12-06 06:54:33.809580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:21.200 [2024-12-06 06:54:33.809588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:28:21.200 [2024-12-06 06:54:33.809596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.200 [2024-12-06 06:54:33.814703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:21.200 [2024-12-06 06:54:33.814735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:21.200 [2024-12-06 06:54:33.814745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.023 ms 00:28:21.200 [2024-12-06 06:54:33.814756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.200 [2024-12-06 06:54:33.814822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.200 [2024-12-06 06:54:33.814830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:21.200 [2024-12-06 06:54:33.814838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:28:21.200 [2024-12-06 06:54:33.814845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.200 [2024-12-06 06:54:33.814887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.200 [2024-12-06 06:54:33.814897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:21.200 [2024-12-06 06:54:33.814905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:21.200 [2024-12-06 06:54:33.814912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.200 [2024-12-06 06:54:33.814937] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:21.200 [2024-12-06 06:54:33.818423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.200 [2024-12-06 06:54:33.818450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:21.200 [2024-12-06 06:54:33.818471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.493 ms 00:28:21.200 [2024-12-06 06:54:33.818479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.200 [2024-12-06 06:54:33.818509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.200 [2024-12-06 06:54:33.818517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:21.200 [2024-12-06 06:54:33.818525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:21.200 [2024-12-06 06:54:33.818532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.200 [2024-12-06 06:54:33.818551] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:21.200 [2024-12-06 06:54:33.818570] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:21.200 [2024-12-06 06:54:33.818604] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:21.200 [2024-12-06 06:54:33.818621] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:21.200 [2024-12-06 06:54:33.818724] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:21.200 [2024-12-06 06:54:33.818734] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:21.200 [2024-12-06 06:54:33.818744] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:21.200 [2024-12-06 06:54:33.818753] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:21.200 [2024-12-06 06:54:33.818762] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:21.201 [2024-12-06 06:54:33.818770] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:21.201 [2024-12-06 06:54:33.818778] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:21.201 [2024-12-06 06:54:33.818787] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:21.201 [2024-12-06 06:54:33.818794] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:21.201 [2024-12-06 06:54:33.818802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.201 [2024-12-06 06:54:33.818809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:21.201 [2024-12-06 06:54:33.818817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:28:21.201 [2024-12-06 06:54:33.818824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.201 [2024-12-06 06:54:33.818905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.201 [2024-12-06 06:54:33.818913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:21.201 [2024-12-06 06:54:33.818920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:28:21.201 [2024-12-06 06:54:33.818926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.201 [2024-12-06 06:54:33.819042] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:21.201 [2024-12-06 06:54:33.819052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:21.201 [2024-12-06 06:54:33.819060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:21.201 [2024-12-06 06:54:33.819068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:21.201 [2024-12-06 06:54:33.819076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:21.201 [2024-12-06 06:54:33.819083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:21.201 [2024-12-06 06:54:33.819089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:21.201 [2024-12-06 06:54:33.819097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:21.201 [2024-12-06 06:54:33.819104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:21.201 [2024-12-06 06:54:33.819110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:21.201 [2024-12-06 06:54:33.819117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:21.201 [2024-12-06 06:54:33.819124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:21.201 [2024-12-06 06:54:33.819130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:21.201 [2024-12-06 06:54:33.819142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:21.201 [2024-12-06 06:54:33.819149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:21.201 [2024-12-06 06:54:33.819156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:21.201 [2024-12-06 06:54:33.819163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:21.201 [2024-12-06 06:54:33.819169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:21.201 [2024-12-06 06:54:33.819175] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:21.201 [2024-12-06 06:54:33.819182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:21.201 [2024-12-06 06:54:33.819188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:21.201 [2024-12-06 06:54:33.819195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:21.201 [2024-12-06 06:54:33.819201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:21.201 [2024-12-06 06:54:33.819207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:21.201 [2024-12-06 06:54:33.819214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:21.201 [2024-12-06 06:54:33.819220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:21.201 [2024-12-06 06:54:33.819227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:21.201 [2024-12-06 06:54:33.819233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:21.201 [2024-12-06 06:54:33.819239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:21.201 [2024-12-06 06:54:33.819246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:21.201 [2024-12-06 06:54:33.819252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:21.201 [2024-12-06 06:54:33.819258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:21.201 [2024-12-06 06:54:33.819264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:21.201 [2024-12-06 06:54:33.819271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:21.201 [2024-12-06 06:54:33.819277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:21.201 [2024-12-06 06:54:33.819283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:21.201 [2024-12-06 06:54:33.819289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:21.201 [2024-12-06 06:54:33.819296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:21.201 [2024-12-06 06:54:33.819302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:21.201 [2024-12-06 06:54:33.819308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:21.201 [2024-12-06 06:54:33.819315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:21.201 [2024-12-06 06:54:33.819321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:21.201 [2024-12-06 06:54:33.819328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:21.201 [2024-12-06 06:54:33.819334] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:21.201 [2024-12-06 06:54:33.819341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:21.201 [2024-12-06 06:54:33.819348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:21.201 [2024-12-06 06:54:33.819354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:21.201 [2024-12-06 06:54:33.819364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:21.201 [2024-12-06 06:54:33.819371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:21.201 [2024-12-06 06:54:33.819377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:21.201 
[2024-12-06 06:54:33.819394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:21.201 [2024-12-06 06:54:33.819401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:21.201 [2024-12-06 06:54:33.819408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:21.201 [2024-12-06 06:54:33.819416] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:21.201 [2024-12-06 06:54:33.819425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:21.201 [2024-12-06 06:54:33.819436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:21.201 [2024-12-06 06:54:33.819443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:21.201 [2024-12-06 06:54:33.819451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:21.201 [2024-12-06 06:54:33.819458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:21.201 [2024-12-06 06:54:33.819481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:21.201 [2024-12-06 06:54:33.819489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:21.201 [2024-12-06 06:54:33.819497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:21.201 [2024-12-06 06:54:33.819504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:21.201 [2024-12-06 06:54:33.819511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:21.201 [2024-12-06 06:54:33.819518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:21.201 [2024-12-06 06:54:33.819525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:21.201 [2024-12-06 06:54:33.819533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:21.201 [2024-12-06 06:54:33.819540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:21.201 [2024-12-06 06:54:33.819548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:21.201 [2024-12-06 06:54:33.819555] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:21.201 [2024-12-06 06:54:33.819568] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:21.201 [2024-12-06 06:54:33.819576] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:21.201 [2024-12-06 06:54:33.819583] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:21.201 [2024-12-06 06:54:33.819590] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:21.201 [2024-12-06 06:54:33.819597] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:21.201 [2024-12-06 06:54:33.819604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.201 [2024-12-06 06:54:33.819611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:21.201 [2024-12-06 06:54:33.819619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.631 ms 00:28:21.201 [2024-12-06 06:54:33.819626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.201 [2024-12-06 06:54:33.845240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.201 [2024-12-06 06:54:33.845391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:21.201 [2024-12-06 06:54:33.845407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.563 ms 00:28:21.201 [2024-12-06 06:54:33.845419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.201 [2024-12-06 06:54:33.845514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.201 [2024-12-06 06:54:33.845523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:21.201 [2024-12-06 06:54:33.845532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:28:21.201 [2024-12-06 06:54:33.845540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.201 [2024-12-06 06:54:33.889818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.202 [2024-12-06 06:54:33.889973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:21.202 [2024-12-06 06:54:33.889992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.229 ms 00:28:21.202 [2024-12-06 06:54:33.890000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.202 [2024-12-06 06:54:33.890041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.202 [2024-12-06 06:54:33.890051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:21.202 [2024-12-06 06:54:33.890063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:21.202 [2024-12-06 06:54:33.890071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.202 [2024-12-06 06:54:33.890431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.202 [2024-12-06 06:54:33.890446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:21.202 [2024-12-06 06:54:33.890456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:28:21.202 [2024-12-06 06:54:33.890482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.202 [2024-12-06 06:54:33.890603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.202 [2024-12-06 06:54:33.890612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:21.202 [2024-12-06 06:54:33.890625] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:28:21.202 [2024-12-06 06:54:33.890633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.202 [2024-12-06 06:54:33.903550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.202 [2024-12-06 06:54:33.903582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:21.202 [2024-12-06 06:54:33.903592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.896 ms 00:28:21.202 [2024-12-06 06:54:33.903599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.202 [2024-12-06 06:54:33.915678] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:21.202 [2024-12-06 06:54:33.915710] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:21.202 [2024-12-06 06:54:33.915721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.202 [2024-12-06 06:54:33.915729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:21.202 [2024-12-06 06:54:33.915737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.033 ms 00:28:21.202 [2024-12-06 06:54:33.915744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.459 [2024-12-06 06:54:33.939871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.459 [2024-12-06 06:54:33.939924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:21.459 [2024-12-06 06:54:33.939936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.090 ms 00:28:21.459 [2024-12-06 06:54:33.939945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.459 [2024-12-06 06:54:33.951255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.459 [2024-12-06 06:54:33.951288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:21.459 [2024-12-06 06:54:33.951298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.257 ms 00:28:21.459 [2024-12-06 06:54:33.951305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.459 [2024-12-06 06:54:33.962240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.459 [2024-12-06 06:54:33.962270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:21.459 [2024-12-06 06:54:33.962280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.904 ms 00:28:21.459 [2024-12-06 06:54:33.962287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.459 [2024-12-06 06:54:33.962903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.459 [2024-12-06 06:54:33.962928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:21.459 [2024-12-06 06:54:33.962941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:28:21.459 [2024-12-06 06:54:33.962948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.459 [2024-12-06 06:54:34.017099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.459 [2024-12-06 06:54:34.017308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:21.460 [2024-12-06 06:54:34.017333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 54.133 ms 00:28:21.460 [2024-12-06 06:54:34.017341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.460 [2024-12-06 06:54:34.028571] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:21.460 [2024-12-06 06:54:34.031098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.460 [2024-12-06 06:54:34.031128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:21.460 [2024-12-06 06:54:34.031141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.713 ms 00:28:21.460 [2024-12-06 06:54:34.031149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.460 [2024-12-06 06:54:34.031246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.460 [2024-12-06 06:54:34.031257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:21.460 [2024-12-06 06:54:34.031269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:21.460 [2024-12-06 06:54:34.031276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.460 [2024-12-06 06:54:34.031341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.460 [2024-12-06 06:54:34.031351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:21.460 [2024-12-06 06:54:34.031360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:28:21.460 [2024-12-06 06:54:34.031367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.460 [2024-12-06 06:54:34.031401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.460 [2024-12-06 06:54:34.031410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:21.460 [2024-12-06 06:54:34.031418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:28:21.460 [2024-12-06 06:54:34.031425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.460 [2024-12-06 06:54:34.031458] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:21.460 [2024-12-06 06:54:34.031484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.460 [2024-12-06 06:54:34.031492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:21.460 [2024-12-06 06:54:34.031500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:28:21.460 [2024-12-06 06:54:34.031508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.460 [2024-12-06 06:54:34.054510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.460 [2024-12-06 06:54:34.054543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:21.460 [2024-12-06 06:54:34.054558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.984 ms 00:28:21.460 [2024-12-06 06:54:34.054566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.460 [2024-12-06 06:54:34.054630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.460 [2024-12-06 06:54:34.054639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:21.460 [2024-12-06 06:54:34.054648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:28:21.460 [2024-12-06 06:54:34.054655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
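Before the data copy starts, the layout dump from this startup is worth a sanity check, since an identical layout must be recomputed on every reload for restore to succeed. Assuming FTL's 4 KiB block size (an assumption, but it is the value that makes the MiB figures in the dump come out even), the l2p region size, the table size implied by 20971520 entries at a 4-byte address size, and the addressable capacity all agree — a sketch:

# Cross-checks of the startup layout dump (assumes a 4 KiB FTL block size).
echo "l2p region:    $(( 0x5000 * 4096 / 1024 / 1024 )) MiB"   # blk_sz 0x5000 -> 80, matches "blocks: 80.00 MiB"
echo "l2p table:     $(( 20971520 * 4 / 1024 / 1024 )) MiB"    # 20971520 entries x 4 B addresses -> 80
echo "user capacity: $(( 20971520 * 4096 / 1024 ** 3 )) GiB"   # addressable LBAs -> 80 GiB

80 GiB of addressable space over the 102400.00 MiB data_btm region is consistent with roughly a fifth of the raw capacity being held back for over-provisioning and relocation. Note also that every management step is bracketed by the same four trace_step records (Action/Rollback, name, duration, status; mngt/ftl_mngt.c lines 427/428/430/431), and the "Management process finished" summary that follows totals them up: 260.453 ms for this startup versus 349.509 ms for the shutdown above.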
00:28:21.460 [2024-12-06 06:54:34.055623] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 260.453 ms, result 0 00:28:22.391  [2024-12-06T06:54:36.502Z] Copying: 39/1024 [MB] (39 MBps) [2024-12-06T06:54:37.070Z] Copying: 75/1024 [MB] (36 MBps) [2024-12-06T06:54:38.457Z] Copying: 112/1024 [MB] (36 MBps) [2024-12-06T06:54:39.401Z] Copying: 138/1024 [MB] (25 MBps) [2024-12-06T06:54:40.345Z] Copying: 175/1024 [MB] (37 MBps) [2024-12-06T06:54:41.289Z] Copying: 213/1024 [MB] (37 MBps) [2024-12-06T06:54:42.234Z] Copying: 253/1024 [MB] (39 MBps) [2024-12-06T06:54:43.175Z] Copying: 283/1024 [MB] (29 MBps) [2024-12-06T06:54:44.120Z] Copying: 316/1024 [MB] (33 MBps) [2024-12-06T06:54:45.503Z] Copying: 350/1024 [MB] (33 MBps) [2024-12-06T06:54:46.074Z] Copying: 386/1024 [MB] (36 MBps) [2024-12-06T06:54:47.476Z] Copying: 416/1024 [MB] (29 MBps) [2024-12-06T06:54:48.420Z] Copying: 450/1024 [MB] (34 MBps) [2024-12-06T06:54:49.358Z] Copying: 480/1024 [MB] (30 MBps) [2024-12-06T06:54:50.298Z] Copying: 521/1024 [MB] (40 MBps) [2024-12-06T06:54:51.241Z] Copying: 553/1024 [MB] (31 MBps) [2024-12-06T06:54:52.182Z] Copying: 580/1024 [MB] (27 MBps) [2024-12-06T06:54:53.182Z] Copying: 605/1024 [MB] (25 MBps) [2024-12-06T06:54:54.125Z] Copying: 640/1024 [MB] (34 MBps) [2024-12-06T06:54:55.504Z] Copying: 673/1024 [MB] (33 MBps) [2024-12-06T06:54:56.076Z] Copying: 714/1024 [MB] (41 MBps) [2024-12-06T06:54:57.463Z] Copying: 754/1024 [MB] (39 MBps) [2024-12-06T06:54:58.412Z] Copying: 794/1024 [MB] (40 MBps) [2024-12-06T06:54:59.362Z] Copying: 835/1024 [MB] (40 MBps) [2024-12-06T06:55:00.304Z] Copying: 876/1024 [MB] (41 MBps) [2024-12-06T06:55:01.249Z] Copying: 910/1024 [MB] (34 MBps) [2024-12-06T06:55:02.193Z] Copying: 948/1024 [MB] (38 MBps) [2024-12-06T06:55:03.137Z] Copying: 986/1024 [MB] (37 MBps) [2024-12-06T06:55:03.137Z] Copying: 1024/1024 [MB] (average 35 MBps)[2024-12-06 06:55:02.956416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.396 [2024-12-06 06:55:02.956487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:50.396 [2024-12-06 06:55:02.956502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:50.396 [2024-12-06 06:55:02.956512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.396 [2024-12-06 06:55:02.956533] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:50.396 [2024-12-06 06:55:02.959319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.396 [2024-12-06 06:55:02.959525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:50.396 [2024-12-06 06:55:02.959543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.770 ms 00:28:50.396 [2024-12-06 06:55:02.959553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.396 [2024-12-06 06:55:02.961065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.396 [2024-12-06 06:55:02.961092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:50.397 [2024-12-06 06:55:02.961102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.489 ms 00:28:50.397 [2024-12-06 06:55:02.961110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.397 [2024-12-06 06:55:02.976304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.397 [2024-12-06 
06:55:02.976338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:50.397 [2024-12-06 06:55:02.976349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.178 ms 00:28:50.397 [2024-12-06 06:55:02.976362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.397 [2024-12-06 06:55:02.982531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.397 [2024-12-06 06:55:02.982675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:50.397 [2024-12-06 06:55:02.982690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.138 ms 00:28:50.397 [2024-12-06 06:55:02.982698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.397 [2024-12-06 06:55:03.006850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.397 [2024-12-06 06:55:03.006883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:50.397 [2024-12-06 06:55:03.006894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.101 ms 00:28:50.397 [2024-12-06 06:55:03.006902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.397 [2024-12-06 06:55:03.020991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.397 [2024-12-06 06:55:03.021023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:50.397 [2024-12-06 06:55:03.021035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.056 ms 00:28:50.397 [2024-12-06 06:55:03.021044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.397 [2024-12-06 06:55:03.021189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.397 [2024-12-06 06:55:03.021201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:50.397 [2024-12-06 06:55:03.021210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:28:50.397 [2024-12-06 06:55:03.021218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.397 [2024-12-06 06:55:03.043863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.397 [2024-12-06 06:55:03.043895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:50.397 [2024-12-06 06:55:03.043906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.631 ms 00:28:50.397 [2024-12-06 06:55:03.043915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.397 [2024-12-06 06:55:03.066547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.397 [2024-12-06 06:55:03.066577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:50.397 [2024-12-06 06:55:03.066588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.600 ms 00:28:50.397 [2024-12-06 06:55:03.066596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.397 [2024-12-06 06:55:03.088139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.397 [2024-12-06 06:55:03.088171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:50.397 [2024-12-06 06:55:03.088181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.510 ms 00:28:50.397 [2024-12-06 06:55:03.088190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.397 [2024-12-06 06:55:03.109773] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.397 [2024-12-06 06:55:03.109909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:50.397 [2024-12-06 06:55:03.109925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.529 ms 00:28:50.397 [2024-12-06 06:55:03.109932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.397 [2024-12-06 06:55:03.109960] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:50.397 [2024-12-06 06:55:03.109980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.109993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110145] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 
[2024-12-06 06:55:03.110336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:50.397 [2024-12-06 06:55:03.110388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:50.398 [2024-12-06 06:55:03.110395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:50.398 [2024-12-06 06:55:03.110403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:50.398 [2024-12-06 06:55:03.110411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:50.398 [2024-12-06 06:55:03.110418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:50.398 [2024-12-06 06:55:03.110426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:50.398 [2024-12-06 06:55:03.110435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:50.398 [2024-12-06 06:55:03.110442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:50.398 [2024-12-06 06:55:03.110450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:50.398 [2024-12-06 06:55:03.110457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:50.398 [2024-12-06 06:55:03.110480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:50.398 [2024-12-06 06:55:03.110488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:50.398 [2024-12-06 06:55:03.110495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:50.398 [2024-12-06 06:55:03.110503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:50.398 [2024-12-06 06:55:03.110510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:50.398 [2024-12-06 06:55:03.110518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:50.398 [2024-12-06 06:55:03.110525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:50.398 [2024-12-06 06:55:03.110534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 
state: free 00:28:50.398 [2024-12-06 06:55:03.110541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:50.398 [2024-12-06 06:55:03.110548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:50.398 [2024-12-06 06:55:03.110555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:50.398 [2024-12-06 06:55:03.110565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:50.398 [2024-12-06 06:55:03.110572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:50.398 [2024-12-06 06:55:03.110583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:50.398 [2024-12-06 06:55:03.110591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:50.398 [2024-12-06 06:55:03.110598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:50.398 [2024-12-06 06:55:03.110605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:50.398 [2024-12-06 06:55:03.110613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:50.398 [2024-12-06 06:55:03.110620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:50.398 [2024-12-06 06:55:03.110628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:50.398 [2024-12-06 06:55:03.110635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:50.398 [2024-12-06 06:55:03.110642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:50.398 [2024-12-06 06:55:03.110650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:50.398 [2024-12-06 06:55:03.110657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:50.398 [2024-12-06 06:55:03.110664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:50.398 [2024-12-06 06:55:03.110671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:50.398 [2024-12-06 06:55:03.110679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:50.398 [2024-12-06 06:55:03.110686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:50.398 [2024-12-06 06:55:03.110695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:50.398 [2024-12-06 06:55:03.110702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:50.398 [2024-12-06 06:55:03.110709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:50.398 [2024-12-06 06:55:03.110716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:50.398 [2024-12-06 06:55:03.110724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 
0 / 261120 wr_cnt: 0 state: free 00:28:50.398 [2024-12-06 06:55:03.110732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:50.398 [2024-12-06 06:55:03.110739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:50.398 [2024-12-06 06:55:03.110747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:50.398 [2024-12-06 06:55:03.110754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:50.398 [2024-12-06 06:55:03.110769] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:50.398 [2024-12-06 06:55:03.110777] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d910f68e-4869-4d1c-9d11-c1e41849a5be 00:28:50.398 [2024-12-06 06:55:03.110785] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:50.398 [2024-12-06 06:55:03.110792] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:50.398 [2024-12-06 06:55:03.110799] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:50.398 [2024-12-06 06:55:03.110806] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:50.398 [2024-12-06 06:55:03.110821] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:50.398 [2024-12-06 06:55:03.110828] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:50.398 [2024-12-06 06:55:03.110836] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:50.398 [2024-12-06 06:55:03.110842] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:50.398 [2024-12-06 06:55:03.110849] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:50.398 [2024-12-06 06:55:03.110856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.398 [2024-12-06 06:55:03.110864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:50.398 [2024-12-06 06:55:03.110873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.897 ms 00:28:50.398 [2024-12-06 06:55:03.110883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.398 [2024-12-06 06:55:03.123559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.398 [2024-12-06 06:55:03.123588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:50.398 [2024-12-06 06:55:03.123599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.661 ms 00:28:50.398 [2024-12-06 06:55:03.123608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.398 [2024-12-06 06:55:03.123964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.398 [2024-12-06 06:55:03.123973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:50.398 [2024-12-06 06:55:03.123985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:28:50.398 [2024-12-06 06:55:03.123993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.660 [2024-12-06 06:55:03.159024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:50.660 [2024-12-06 06:55:03.159056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:50.660 [2024-12-06 06:55:03.159067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:28:50.660 [2024-12-06 06:55:03.159076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.660 [2024-12-06 06:55:03.159132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:50.660 [2024-12-06 06:55:03.159144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:50.660 [2024-12-06 06:55:03.159157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:50.660 [2024-12-06 06:55:03.159166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.660 [2024-12-06 06:55:03.159224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:50.660 [2024-12-06 06:55:03.159234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:50.660 [2024-12-06 06:55:03.159244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:50.660 [2024-12-06 06:55:03.159253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.660 [2024-12-06 06:55:03.159269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:50.660 [2024-12-06 06:55:03.159278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:50.660 [2024-12-06 06:55:03.159286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:50.660 [2024-12-06 06:55:03.159299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.660 [2024-12-06 06:55:03.241224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:50.660 [2024-12-06 06:55:03.241419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:50.660 [2024-12-06 06:55:03.241437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:50.660 [2024-12-06 06:55:03.241446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.660 [2024-12-06 06:55:03.308295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:50.660 [2024-12-06 06:55:03.308479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:50.660 [2024-12-06 06:55:03.308502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:50.660 [2024-12-06 06:55:03.308510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.660 [2024-12-06 06:55:03.308582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:50.660 [2024-12-06 06:55:03.308592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:50.660 [2024-12-06 06:55:03.308600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:50.660 [2024-12-06 06:55:03.308608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.660 [2024-12-06 06:55:03.308645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:50.660 [2024-12-06 06:55:03.308654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:50.660 [2024-12-06 06:55:03.308662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:50.660 [2024-12-06 06:55:03.308670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.660 [2024-12-06 06:55:03.308765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:50.660 [2024-12-06 06:55:03.308775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:50.660 
[2024-12-06 06:55:03.308785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:50.660 [2024-12-06 06:55:03.308793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.660 [2024-12-06 06:55:03.308823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:50.660 [2024-12-06 06:55:03.308833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:50.660 [2024-12-06 06:55:03.308841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:50.660 [2024-12-06 06:55:03.308849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.660 [2024-12-06 06:55:03.308890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:50.660 [2024-12-06 06:55:03.308900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:50.660 [2024-12-06 06:55:03.308908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:50.660 [2024-12-06 06:55:03.308916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.660 [2024-12-06 06:55:03.308960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:50.660 [2024-12-06 06:55:03.308970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:50.660 [2024-12-06 06:55:03.308979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:50.660 [2024-12-06 06:55:03.308986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.660 [2024-12-06 06:55:03.309109] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 352.661 ms, result 0 00:28:53.204 00:28:53.204 00:28:53.204 06:55:05 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:28:53.204 [2024-12-06 06:55:05.838932] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
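The spdk_dd invocation above performs the actual restore read: --ib=ftl0 takes the FTL bdev as input, --of writes the test file, and --json supplies the FTL configuration. Below is a quick sketch of the sizes implied by --skip/--count, assuming both are counted in the bdev's 4 KiB logical blocks; that assumption is not stated in the log but is borne out by the copy meter further down, which tops out at 1024/1024 [MB] for --count=262144.

# Rough size math for the spdk_dd restore step above. Assumes --count and
# --skip are counted in the FTL bdev's logical blocks and that the block
# size is 4 KiB; both assumptions are consistent with the copy meter
# later in this log (1024/1024 [MB] for --count=262144).
BLOCK = 4096

count = 262144        # --count
skip = 131072         # --skip

print(count * BLOCK // (1024 * 1024), "MiB copied")    # 1024 MiB
print(skip * BLOCK // (1024 * 1024), "MiB skipped")    # 512 MiB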
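Both the FTL shutdown sequence above and the startup sequence that follows are built from the same mngt/ftl_mngt.c trace_step quartets (Action/Rollback, name, duration, status), which makes per-step timings easy to pull out of a saved copy of this console log. A minimal sketch follows; the log file path is hypothetical and the regex is tailored to the entry format visible in this log.

# Pairs each trace_step "name: ..." entry with its "duration: ... ms"
# entry and prints the steps slowest-first. Works whether the entries sit
# one per line (as in the raw console) or are flattened with spaces.
import re

PAIR = re.compile(
    r"name: (?P<name>.+?)\s+\d{2}:\d{2}:\d{2}\.\d{3}.*?"
    r"trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] duration: (?P<ms>[0-9.]+) ms",
    re.DOTALL,
)

def step_timings(text):
    return [(m["name"], float(m["ms"])) for m in PAIR.finditer(text)]

if __name__ == "__main__":
    with open("console.log") as fh:   # path is hypothetical
        for name, ms in sorted(step_timings(fh.read()), key=lambda p: -p[1]):
            print(f"{ms:10.3f} ms  {name}")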
00:28:53.204 [2024-12-06 06:55:05.839053] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78782 ] 00:28:53.464 [2024-12-06 06:55:05.998275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:53.464 [2024-12-06 06:55:06.096942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:53.724 [2024-12-06 06:55:06.353620] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:53.724 [2024-12-06 06:55:06.353682] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:53.986 [2024-12-06 06:55:06.507718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.986 [2024-12-06 06:55:06.507886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:53.986 [2024-12-06 06:55:06.507906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:53.986 [2024-12-06 06:55:06.507915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.986 [2024-12-06 06:55:06.507964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.986 [2024-12-06 06:55:06.507976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:53.986 [2024-12-06 06:55:06.507984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:28:53.986 [2024-12-06 06:55:06.507991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.986 [2024-12-06 06:55:06.508011] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:53.986 [2024-12-06 06:55:06.508723] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:53.986 [2024-12-06 06:55:06.508750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.986 [2024-12-06 06:55:06.508758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:53.986 [2024-12-06 06:55:06.508767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.744 ms 00:28:53.986 [2024-12-06 06:55:06.508774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.986 [2024-12-06 06:55:06.509805] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:53.986 [2024-12-06 06:55:06.521928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.986 [2024-12-06 06:55:06.521961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:53.986 [2024-12-06 06:55:06.521973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.125 ms 00:28:53.986 [2024-12-06 06:55:06.521982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.986 [2024-12-06 06:55:06.522036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.986 [2024-12-06 06:55:06.522046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:53.986 [2024-12-06 06:55:06.522054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:28:53.986 [2024-12-06 06:55:06.522061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.986 [2024-12-06 06:55:06.526774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:53.986 [2024-12-06 06:55:06.526895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:53.986 [2024-12-06 06:55:06.526909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.658 ms 00:28:53.986 [2024-12-06 06:55:06.526921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.986 [2024-12-06 06:55:06.526987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.986 [2024-12-06 06:55:06.526996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:53.986 [2024-12-06 06:55:06.527004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:28:53.986 [2024-12-06 06:55:06.527011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.986 [2024-12-06 06:55:06.527050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.986 [2024-12-06 06:55:06.527060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:53.986 [2024-12-06 06:55:06.527067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:53.986 [2024-12-06 06:55:06.527074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.986 [2024-12-06 06:55:06.527096] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:53.986 [2024-12-06 06:55:06.530301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.986 [2024-12-06 06:55:06.530400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:53.986 [2024-12-06 06:55:06.530416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.210 ms 00:28:53.987 [2024-12-06 06:55:06.530424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.987 [2024-12-06 06:55:06.530456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.987 [2024-12-06 06:55:06.530477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:53.987 [2024-12-06 06:55:06.530486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:53.987 [2024-12-06 06:55:06.530493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.987 [2024-12-06 06:55:06.530511] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:53.987 [2024-12-06 06:55:06.530529] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:53.987 [2024-12-06 06:55:06.530562] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:53.987 [2024-12-06 06:55:06.530579] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:53.987 [2024-12-06 06:55:06.530679] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:53.987 [2024-12-06 06:55:06.530690] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:53.987 [2024-12-06 06:55:06.530702] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:53.987 [2024-12-06 06:55:06.530711] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:53.987 [2024-12-06 06:55:06.530719] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:53.987 [2024-12-06 06:55:06.530727] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:53.987 [2024-12-06 06:55:06.530735] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:53.987 [2024-12-06 06:55:06.530744] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:53.987 [2024-12-06 06:55:06.530752] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:53.987 [2024-12-06 06:55:06.530759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.987 [2024-12-06 06:55:06.530766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:53.987 [2024-12-06 06:55:06.530774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:28:53.987 [2024-12-06 06:55:06.530781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.987 [2024-12-06 06:55:06.530862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.987 [2024-12-06 06:55:06.530870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:53.987 [2024-12-06 06:55:06.530878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:28:53.987 [2024-12-06 06:55:06.530884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.987 [2024-12-06 06:55:06.530984] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:53.987 [2024-12-06 06:55:06.530994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:53.987 [2024-12-06 06:55:06.531002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:53.987 [2024-12-06 06:55:06.531009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:53.987 [2024-12-06 06:55:06.531017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:53.987 [2024-12-06 06:55:06.531023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:53.987 [2024-12-06 06:55:06.531030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:53.987 [2024-12-06 06:55:06.531037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:53.987 [2024-12-06 06:55:06.531045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:53.987 [2024-12-06 06:55:06.531052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:53.987 [2024-12-06 06:55:06.531058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:53.987 [2024-12-06 06:55:06.531065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:53.987 [2024-12-06 06:55:06.531072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:53.987 [2024-12-06 06:55:06.531083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:53.987 [2024-12-06 06:55:06.531091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:53.987 [2024-12-06 06:55:06.531098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:53.987 [2024-12-06 06:55:06.531104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:53.987 [2024-12-06 06:55:06.531111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:53.987 [2024-12-06 06:55:06.531117] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:53.987 [2024-12-06 06:55:06.531124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:53.987 [2024-12-06 06:55:06.531130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:53.987 [2024-12-06 06:55:06.531136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:53.987 [2024-12-06 06:55:06.531143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:53.987 [2024-12-06 06:55:06.531149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:53.987 [2024-12-06 06:55:06.531155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:53.987 [2024-12-06 06:55:06.531162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:53.987 [2024-12-06 06:55:06.531168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:53.987 [2024-12-06 06:55:06.531174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:53.987 [2024-12-06 06:55:06.531181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:53.987 [2024-12-06 06:55:06.531187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:53.987 [2024-12-06 06:55:06.531193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:53.987 [2024-12-06 06:55:06.531200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:53.987 [2024-12-06 06:55:06.531206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:53.987 [2024-12-06 06:55:06.531212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:53.987 [2024-12-06 06:55:06.531219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:53.987 [2024-12-06 06:55:06.531225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:53.987 [2024-12-06 06:55:06.531231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:53.987 [2024-12-06 06:55:06.531238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:53.987 [2024-12-06 06:55:06.531244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:53.987 [2024-12-06 06:55:06.531250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:53.987 [2024-12-06 06:55:06.531257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:53.987 [2024-12-06 06:55:06.531263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:53.987 [2024-12-06 06:55:06.531269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:53.987 [2024-12-06 06:55:06.531276] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:53.987 [2024-12-06 06:55:06.531283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:53.987 [2024-12-06 06:55:06.531290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:53.987 [2024-12-06 06:55:06.531299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:53.987 [2024-12-06 06:55:06.531306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:53.987 [2024-12-06 06:55:06.531313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:53.987 [2024-12-06 06:55:06.531320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:53.987 
[2024-12-06 06:55:06.531326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:53.987 [2024-12-06 06:55:06.531333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:53.987 [2024-12-06 06:55:06.531340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:53.987 [2024-12-06 06:55:06.531347] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:53.987 [2024-12-06 06:55:06.531355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:53.987 [2024-12-06 06:55:06.531365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:53.987 [2024-12-06 06:55:06.531373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:53.987 [2024-12-06 06:55:06.531380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:53.987 [2024-12-06 06:55:06.531404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:53.987 [2024-12-06 06:55:06.531411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:53.987 [2024-12-06 06:55:06.531418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:53.987 [2024-12-06 06:55:06.531425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:53.987 [2024-12-06 06:55:06.531433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:53.987 [2024-12-06 06:55:06.531440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:53.987 [2024-12-06 06:55:06.531447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:53.987 [2024-12-06 06:55:06.531454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:53.987 [2024-12-06 06:55:06.531479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:53.987 [2024-12-06 06:55:06.531487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:53.987 [2024-12-06 06:55:06.531494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:53.987 [2024-12-06 06:55:06.531502] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:53.987 [2024-12-06 06:55:06.531510] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:53.987 [2024-12-06 06:55:06.531518] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:53.988 [2024-12-06 06:55:06.531525] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:53.988 [2024-12-06 06:55:06.531532] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:53.988 [2024-12-06 06:55:06.531540] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:53.988 [2024-12-06 06:55:06.531547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.988 [2024-12-06 06:55:06.531554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:53.988 [2024-12-06 06:55:06.531562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.632 ms 00:28:53.988 [2024-12-06 06:55:06.531569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.988 [2024-12-06 06:55:06.557234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.988 [2024-12-06 06:55:06.557341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:53.988 [2024-12-06 06:55:06.557391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.608 ms 00:28:53.988 [2024-12-06 06:55:06.557418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.988 [2024-12-06 06:55:06.557519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.988 [2024-12-06 06:55:06.557542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:53.988 [2024-12-06 06:55:06.557561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:28:53.988 [2024-12-06 06:55:06.557612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.988 [2024-12-06 06:55:06.600753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.988 [2024-12-06 06:55:06.600887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:53.988 [2024-12-06 06:55:06.600948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.075 ms 00:28:53.988 [2024-12-06 06:55:06.600972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.988 [2024-12-06 06:55:06.601022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.988 [2024-12-06 06:55:06.601046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:53.988 [2024-12-06 06:55:06.601071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:53.988 [2024-12-06 06:55:06.601155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.988 [2024-12-06 06:55:06.601532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.988 [2024-12-06 06:55:06.601671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:53.988 [2024-12-06 06:55:06.601729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:28:53.988 [2024-12-06 06:55:06.601751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.988 [2024-12-06 06:55:06.601914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.988 [2024-12-06 06:55:06.601970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:53.988 [2024-12-06 06:55:06.602021] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:28:53.988 [2024-12-06 06:55:06.602074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.988 [2024-12-06 06:55:06.614989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.988 [2024-12-06 06:55:06.615088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:53.988 [2024-12-06 06:55:06.615165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.882 ms 00:28:53.988 [2024-12-06 06:55:06.615188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.988 [2024-12-06 06:55:06.627420] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 3, empty chunks = 1 00:28:53.988 [2024-12-06 06:55:06.627551] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:53.988 [2024-12-06 06:55:06.627620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.988 [2024-12-06 06:55:06.627641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:53.988 [2024-12-06 06:55:06.627661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.330 ms 00:28:53.988 [2024-12-06 06:55:06.627679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.988 [2024-12-06 06:55:06.651683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.988 [2024-12-06 06:55:06.651789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:53.988 [2024-12-06 06:55:06.651839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.962 ms 00:28:53.988 [2024-12-06 06:55:06.651862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.988 [2024-12-06 06:55:06.663247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.988 [2024-12-06 06:55:06.663341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:53.988 [2024-12-06 06:55:06.663407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.340 ms 00:28:53.988 [2024-12-06 06:55:06.663430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.988 [2024-12-06 06:55:06.674420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.988 [2024-12-06 06:55:06.674535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:53.988 [2024-12-06 06:55:06.674588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.866 ms 00:28:53.988 [2024-12-06 06:55:06.674610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:53.988 [2024-12-06 06:55:06.675218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:53.988 [2024-12-06 06:55:06.675304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:53.988 [2024-12-06 06:55:06.675357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.522 ms 00:28:53.988 [2024-12-06 06:55:06.675379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.249 [2024-12-06 06:55:06.729408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.249 [2024-12-06 06:55:06.729583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:54.249 [2024-12-06 06:55:06.729652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 53.981 ms 00:28:54.249 [2024-12-06 06:55:06.729674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.249 [2024-12-06 06:55:06.739763] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:54.249 [2024-12-06 06:55:06.742187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.249 [2024-12-06 06:55:06.742280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:54.249 [2024-12-06 06:55:06.742328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.468 ms 00:28:54.249 [2024-12-06 06:55:06.742350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.249 [2024-12-06 06:55:06.742450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.249 [2024-12-06 06:55:06.742496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:54.249 [2024-12-06 06:55:06.742521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:54.249 [2024-12-06 06:55:06.742570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.249 [2024-12-06 06:55:06.742708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.249 [2024-12-06 06:55:06.742765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:54.249 [2024-12-06 06:55:06.742809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:28:54.249 [2024-12-06 06:55:06.742831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.249 [2024-12-06 06:55:06.742867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.249 [2024-12-06 06:55:06.742889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:54.249 [2024-12-06 06:55:06.743021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:54.249 [2024-12-06 06:55:06.743043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.249 [2024-12-06 06:55:06.743111] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:54.249 [2024-12-06 06:55:06.743136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.249 [2024-12-06 06:55:06.743155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:54.249 [2024-12-06 06:55:06.743175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:28:54.249 [2024-12-06 06:55:06.743193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.249 [2024-12-06 06:55:06.765802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.249 [2024-12-06 06:55:06.765909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:54.249 [2024-12-06 06:55:06.765972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.460 ms 00:28:54.249 [2024-12-06 06:55:06.765994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.249 [2024-12-06 06:55:06.766063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.249 [2024-12-06 06:55:06.766086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:54.249 [2024-12-06 06:55:06.766106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:28:54.249 [2024-12-06 06:55:06.766158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
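By this point in the startup bring-up the L2P cache has been sized ("l2p maximum resident size is: 9 (of 10) MiB" above), while the layout dump earlier reported the full L2P region at 80.00 MiB. The numbers hang together, as the short check below shows; note the 4 KiB FTL block size is inferred from the dump itself (the sb region's 0x20 blocks are shown as 0.12 MiB) rather than printed directly anywhere in the log.

# Cross-checking the L2P figures from the startup dump above.
MIB = 1024 * 1024
FTL_BLOCK = 4096                       # inferred: 0x20 * 4096 B = 0.125 MiB

l2p_entries = 20971520                 # "L2P entries: 20971520"
l2p_addr = 4                           # "L2P address size: 4" (bytes)
print(l2p_entries * l2p_addr / MIB)    # 80.0 -> matches "l2p ... 80.00 MiB"

l2p_blocks = 0x5000                    # "Region type:0x2 ... blk_sz:0x5000"
print(l2p_blocks * FTL_BLOCK / MIB)    # 80.0 -> same region counted in blocks

# Only a slice of that 80 MiB table is kept resident, per the
# "l2p maximum resident size is: 9 (of 10) MiB" notice above.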
00:28:54.249 [2024-12-06 06:55:06.767189] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 259.080 ms, result 0 00:28:55.638  [2024-12-06T06:55:09.322Z] Copying: 1008/1048576 [kB] (1008 kBps) [2024-12-06T06:55:10.266Z] Copying: 19/1024 [MB] (18 MBps) [2024-12-06T06:55:11.208Z] Copying: 36/1024 [MB] (17 MBps) [2024-12-06T06:55:12.150Z] Copying: 49/1024 [MB] (13 MBps) [2024-12-06T06:55:13.091Z] Copying: 66/1024 [MB] (17 MBps) [2024-12-06T06:55:14.034Z] Copying: 97/1024 [MB] (31 MBps) [2024-12-06T06:55:15.002Z] Copying: 122/1024 [MB] (24 MBps) [2024-12-06T06:55:16.378Z] Copying: 136/1024 [MB] (14 MBps) [2024-12-06T06:55:17.311Z] Copying: 154/1024 [MB] (18 MBps) [2024-12-06T06:55:18.269Z] Copying: 174/1024 [MB] (19 MBps) [2024-12-06T06:55:19.204Z] Copying: 202/1024 [MB] (28 MBps) [2024-12-06T06:55:20.138Z] Copying: 250/1024 [MB] (48 MBps) [2024-12-06T06:55:21.072Z] Copying: 297/1024 [MB] (46 MBps) [2024-12-06T06:55:22.005Z] Copying: 345/1024 [MB] (48 MBps) [2024-12-06T06:55:23.019Z] Copying: 392/1024 [MB] (46 MBps) [2024-12-06T06:55:23.952Z] Copying: 440/1024 [MB] (48 MBps) [2024-12-06T06:55:25.321Z] Copying: 490/1024 [MB] (49 MBps) [2024-12-06T06:55:26.252Z] Copying: 539/1024 [MB] (48 MBps) [2024-12-06T06:55:27.182Z] Copying: 588/1024 [MB] (48 MBps) [2024-12-06T06:55:28.112Z] Copying: 637/1024 [MB] (49 MBps) [2024-12-06T06:55:29.045Z] Copying: 684/1024 [MB] (47 MBps) [2024-12-06T06:55:29.999Z] Copying: 734/1024 [MB] (50 MBps) [2024-12-06T06:55:31.376Z] Copying: 782/1024 [MB] (48 MBps) [2024-12-06T06:55:32.310Z] Copying: 831/1024 [MB] (48 MBps) [2024-12-06T06:55:33.243Z] Copying: 879/1024 [MB] (48 MBps) [2024-12-06T06:55:34.177Z] Copying: 929/1024 [MB] (49 MBps) [2024-12-06T06:55:35.108Z] Copying: 976/1024 [MB] (47 MBps) [2024-12-06T06:55:35.108Z] Copying: 1019/1024 [MB] (42 MBps) [2024-12-06T06:55:35.108Z] Copying: 1024/1024 [MB] (average 36 MBps)[2024-12-06 06:55:35.050680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:22.367 [2024-12-06 06:55:35.050867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:22.367 [2024-12-06 06:55:35.050897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:22.367 [2024-12-06 06:55:35.050905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.367 [2024-12-06 06:55:35.050930] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:22.367 [2024-12-06 06:55:35.053514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:22.367 [2024-12-06 06:55:35.053547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:22.367 [2024-12-06 06:55:35.053559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.568 ms 00:29:22.367 [2024-12-06 06:55:35.053567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.367 [2024-12-06 06:55:35.054779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:22.367 [2024-12-06 06:55:35.054891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:22.367 [2024-12-06 06:55:35.054905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.191 ms 00:29:22.367 [2024-12-06 06:55:35.054918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.367 [2024-12-06 06:55:35.065445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:22.367 
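The copy meter above is spdk_dd's progress readout (it switches units from kB to MB after the first sample) and closes with an overall average. A rough cross-check of that "(average 36 MBps)" figure from the log's own timestamps follows; the averaging window is assumed to run from the end of FTL startup to the final sample, since spdk_dd's exact window isn't shown in the log.

# Rough cross-check of the copy meter's average throughput. Endpoints are
# read off timestamps printed in the log above; agreement with spdk_dd's
# own figure is therefore only approximate.
from datetime import datetime

FMT = "%Y-%m-%dT%H:%M:%S.%f"
start = datetime.strptime("2024-12-06T06:55:06.767", FMT)  # FTL startup done
end = datetime.strptime("2024-12-06T06:55:35.108", FMT)    # 1024/1024 [MB]

span = (end - start).total_seconds()
print(f"{1024 / span:.1f} MBps over {span:.1f} s")         # ~36.1 MBps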
[2024-12-06 06:55:35.065497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:22.368 [2024-12-06 06:55:35.065509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.510 ms 00:29:22.368 [2024-12-06 06:55:35.065519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.368 [2024-12-06 06:55:35.071712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:22.368 [2024-12-06 06:55:35.071741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:22.368 [2024-12-06 06:55:35.071752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.165 ms 00:29:22.368 [2024-12-06 06:55:35.071765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.368 [2024-12-06 06:55:35.094861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:22.368 [2024-12-06 06:55:35.094896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:22.368 [2024-12-06 06:55:35.094908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.039 ms 00:29:22.368 [2024-12-06 06:55:35.094916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.626 [2024-12-06 06:55:35.108801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:22.626 [2024-12-06 06:55:35.108834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:22.626 [2024-12-06 06:55:35.108845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.853 ms 00:29:22.626 [2024-12-06 06:55:35.108854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.626 [2024-12-06 06:55:35.162140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:22.626 [2024-12-06 06:55:35.162303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:22.626 [2024-12-06 06:55:35.162323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.247 ms 00:29:22.626 [2024-12-06 06:55:35.162332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.626 [2024-12-06 06:55:35.185611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:22.626 [2024-12-06 06:55:35.185650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:22.626 [2024-12-06 06:55:35.185662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.259 ms 00:29:22.626 [2024-12-06 06:55:35.185670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.626 [2024-12-06 06:55:35.207814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:22.626 [2024-12-06 06:55:35.207849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:22.626 [2024-12-06 06:55:35.207861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.109 ms 00:29:22.626 [2024-12-06 06:55:35.207869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.626 [2024-12-06 06:55:35.229450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:22.626 [2024-12-06 06:55:35.229501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:22.626 [2024-12-06 06:55:35.229512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.548 ms 00:29:22.626 [2024-12-06 06:55:35.229520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.626 [2024-12-06 06:55:35.250985] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:22.626 [2024-12-06 06:55:35.251017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:22.626 [2024-12-06 06:55:35.251027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.410 ms 00:29:22.626 [2024-12-06 06:55:35.251035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.626 [2024-12-06 06:55:35.251065] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:22.626 [2024-12-06 06:55:35.251080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131584 / 261120 wr_cnt: 1 state: open 00:29:22.626 [2024-12-06 06:55:35.251091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:22.626 [2024-12-06 06:55:35.251099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:22.626 [2024-12-06 06:55:35.251107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:22.626 [2024-12-06 06:55:35.251114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:22.626 [2024-12-06 06:55:35.251122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:22.626 [2024-12-06 06:55:35.251130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:22.626 [2024-12-06 06:55:35.251137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:22.626 [2024-12-06 06:55:35.251145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:22.626 [2024-12-06 06:55:35.251152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:22.626 [2024-12-06 06:55:35.251160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:22.626 [2024-12-06 06:55:35.251168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:22.626 [2024-12-06 06:55:35.251175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:22.626 [2024-12-06 06:55:35.251183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:22.626 [2024-12-06 06:55:35.251190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:22.626 [2024-12-06 06:55:35.251198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:22.626 [2024-12-06 06:55:35.251205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:22.626 [2024-12-06 06:55:35.251212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:22.626 [2024-12-06 06:55:35.251220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:22.626 [2024-12-06 06:55:35.251227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:22.626 [2024-12-06 06:55:35.251235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:22.626 [2024-12-06 
06:55:35.251242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:22.626 [2024-12-06 06:55:35.251249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:22.626 [2024-12-06 06:55:35.251256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:22.626 [2024-12-06 06:55:35.251264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:22.626 [2024-12-06 06:55:35.251271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:22.626 [2024-12-06 06:55:35.251280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:22.626 [2024-12-06 06:55:35.251289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:22.626 [2024-12-06 06:55:35.251297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:22.626 [2024-12-06 06:55:35.251305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:22.626 [2024-12-06 06:55:35.251313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:22.626 [2024-12-06 06:55:35.251321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:22.626 [2024-12-06 06:55:35.251328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:22.626 [2024-12-06 06:55:35.251336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:22.626 [2024-12-06 06:55:35.251343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:22.626 [2024-12-06 06:55:35.251351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:22.626 [2024-12-06 06:55:35.251358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:22.626 [2024-12-06 06:55:35.251366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:22.626 [2024-12-06 06:55:35.251373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:22.626 [2024-12-06 06:55:35.251380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:22.626 [2024-12-06 06:55:35.251395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:22.626 [2024-12-06 06:55:35.251403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:22.626 [2024-12-06 06:55:35.251411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:22.626 [2024-12-06 06:55:35.251418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:22.626 [2024-12-06 06:55:35.251426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:22.626 [2024-12-06 06:55:35.251433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 
00:29:22.626 [2024-12-06 06:55:35.251440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:22.626 [2024-12-06 06:55:35.251448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:22.626 [2024-12-06 06:55:35.251455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:22.627 [2024-12-06 06:55:35.251477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:22.627 [2024-12-06 06:55:35.251485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:22.627 [2024-12-06 06:55:35.251492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:22.627 [2024-12-06 06:55:35.251500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:22.627 [2024-12-06 06:55:35.251508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:22.627 [2024-12-06 06:55:35.251515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:22.627 [2024-12-06 06:55:35.251523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:22.627 [2024-12-06 06:55:35.251531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:22.627 [2024-12-06 06:55:35.251539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:22.627 [2024-12-06 06:55:35.251547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:22.627 [2024-12-06 06:55:35.251555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:22.627 [2024-12-06 06:55:35.251564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:22.627 [2024-12-06 06:55:35.251572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:22.627 [2024-12-06 06:55:35.251580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:22.627 [2024-12-06 06:55:35.251587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:22.627 [2024-12-06 06:55:35.251594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:22.627 [2024-12-06 06:55:35.251601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:22.627 [2024-12-06 06:55:35.251609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:22.627 [2024-12-06 06:55:35.251616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:22.627 [2024-12-06 06:55:35.251623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:22.627 [2024-12-06 06:55:35.251631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:22.627 [2024-12-06 06:55:35.251638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 
wr_cnt: 0 state: free 00:29:22.627 [2024-12-06 06:55:35.251645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:22.627 [2024-12-06 06:55:35.251652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:22.627 [2024-12-06 06:55:35.251661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:22.627 [2024-12-06 06:55:35.251668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:22.627 [2024-12-06 06:55:35.251675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:22.627 [2024-12-06 06:55:35.251683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:22.627 [2024-12-06 06:55:35.251690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:22.627 [2024-12-06 06:55:35.251697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:22.627 [2024-12-06 06:55:35.251704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:22.627 [2024-12-06 06:55:35.251711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:22.627 [2024-12-06 06:55:35.251718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:22.627 [2024-12-06 06:55:35.251726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:22.627 [2024-12-06 06:55:35.251733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:22.627 [2024-12-06 06:55:35.251740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:22.627 [2024-12-06 06:55:35.251747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:22.627 [2024-12-06 06:55:35.251754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:22.627 [2024-12-06 06:55:35.251761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:22.627 [2024-12-06 06:55:35.251768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:22.627 [2024-12-06 06:55:35.251776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:22.627 [2024-12-06 06:55:35.251783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:22.627 [2024-12-06 06:55:35.251790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:22.627 [2024-12-06 06:55:35.251797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:22.627 [2024-12-06 06:55:35.251805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:22.627 [2024-12-06 06:55:35.251813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:22.627 [2024-12-06 06:55:35.251820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 96: 0 / 261120 wr_cnt: 0 state: free 
00:29:22.627 [2024-12-06 06:55:35.251827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 
00:29:22.627 [2024-12-06 06:55:35.251835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 
00:29:22.627 [2024-12-06 06:55:35.251842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 
00:29:22.627 [2024-12-06 06:55:35.251849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 
00:29:22.627 [2024-12-06 06:55:35.251865] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:29:22.627 [2024-12-06 06:55:35.251873] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d910f68e-4869-4d1c-9d11-c1e41849a5be 
00:29:22.627 [2024-12-06 06:55:35.251881] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131584 
00:29:22.627 [2024-12-06 06:55:35.251888] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 132544 
00:29:22.627 [2024-12-06 06:55:35.251895] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 131584 
00:29:22.627 [2024-12-06 06:55:35.251903] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0073 
00:29:22.627 [2024-12-06 06:55:35.251918] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 
00:29:22.627 [2024-12-06 06:55:35.251932] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  crit: 0 
00:29:22.627 [2024-12-06 06:55:35.251939] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  high: 0 
00:29:22.627 [2024-12-06 06:55:35.251946] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  low: 0 
00:29:22.627 [2024-12-06 06:55:35.251952] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0 
00:29:22.627 [2024-12-06 06:55:35.251959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:22.627 [2024-12-06 06:55:35.251966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics 
00:29:22.627 [2024-12-06 06:55:35.251975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.895 ms 
00:29:22.627 [2024-12-06 06:55:35.251987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0 
00:29:22.627 [2024-12-06 06:55:35.264380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:22.627 [2024-12-06 06:55:35.264504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P 
00:29:22.627 [2024-12-06 06:55:35.264566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 12.377 ms 
00:29:22.627 [2024-12-06 06:55:35.264620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0 
00:29:22.627 [2024-12-06 06:55:35.264971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:22.627 [2024-12-06 06:55:35.265038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing 
00:29:22.627 [2024-12-06 06:55:35.265083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.317 ms 
00:29:22.627 [2024-12-06 06:55:35.265104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0 
00:29:22.627 [2024-12-06 06:55:35.297368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:29:22.627 [2024-12-06 06:55:35.297518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc 
00:29:22.627 [2024-12-06 06:55:35.297577] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:22.627 [2024-12-06 06:55:35.297622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.627 [2024-12-06 06:55:35.297703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:22.627 [2024-12-06 06:55:35.297746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:22.627 [2024-12-06 06:55:35.297788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:22.627 [2024-12-06 06:55:35.297810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.627 [2024-12-06 06:55:35.297924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:22.627 [2024-12-06 06:55:35.298020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:22.627 [2024-12-06 06:55:35.298064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:22.627 [2024-12-06 06:55:35.298085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.627 [2024-12-06 06:55:35.298113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:22.627 [2024-12-06 06:55:35.298133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:22.627 [2024-12-06 06:55:35.298188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:22.627 [2024-12-06 06:55:35.298210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.884 [2024-12-06 06:55:35.374221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:22.884 [2024-12-06 06:55:35.374374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:22.884 [2024-12-06 06:55:35.374421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:22.884 [2024-12-06 06:55:35.374442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.884 [2024-12-06 06:55:35.436228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:22.884 [2024-12-06 06:55:35.436380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:22.884 [2024-12-06 06:55:35.436432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:22.884 [2024-12-06 06:55:35.436454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.884 [2024-12-06 06:55:35.436540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:22.884 [2024-12-06 06:55:35.436614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:22.884 [2024-12-06 06:55:35.436637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:22.884 [2024-12-06 06:55:35.436661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.884 [2024-12-06 06:55:35.436725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:22.884 [2024-12-06 06:55:35.436793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:22.884 [2024-12-06 06:55:35.436813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:22.884 [2024-12-06 06:55:35.436831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.884 [2024-12-06 06:55:35.436960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:22.884 [2024-12-06 06:55:35.437013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory 
pools 00:29:22.884 [2024-12-06 06:55:35.437057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:22.884 [2024-12-06 06:55:35.437079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.884 [2024-12-06 06:55:35.437218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:22.884 [2024-12-06 06:55:35.437234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:22.884 [2024-12-06 06:55:35.437243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:22.884 [2024-12-06 06:55:35.437251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.884 [2024-12-06 06:55:35.437285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:22.884 [2024-12-06 06:55:35.437293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:22.884 [2024-12-06 06:55:35.437301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:22.884 [2024-12-06 06:55:35.437308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.884 [2024-12-06 06:55:35.437353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:22.884 [2024-12-06 06:55:35.437362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:22.884 [2024-12-06 06:55:35.437370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:22.884 [2024-12-06 06:55:35.437378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.884 [2024-12-06 06:55:35.437496] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 386.777 ms, result 0 00:29:23.448 00:29:23.448 00:29:23.448 06:55:36 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:25.974 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:29:25.974 06:55:38 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:29:25.974 06:55:38 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:29:25.974 06:55:38 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:25.974 06:55:38 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:25.974 06:55:38 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:25.974 Process with pid 77580 is not found 00:29:25.974 Remove shared memory files 00:29:25.974 06:55:38 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 77580 00:29:25.974 06:55:38 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77580 ']' 00:29:25.974 06:55:38 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77580 00:29:25.974 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77580) - No such process 00:29:25.974 06:55:38 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 77580 is not found' 00:29:25.974 06:55:38 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:29:25.974 06:55:38 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:25.974 06:55:38 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:29:25.974 06:55:38 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:29:25.974 06:55:38 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:29:25.974 06:55:38 ftl.ftl_restore -- ftl/common.sh@208 -- # 
rm -f rm -f /dev/shm/iscsi 00:29:25.975 06:55:38 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:29:25.975 ************************************ 00:29:25.975 END TEST ftl_restore 00:29:25.975 ************************************ 00:29:25.975 00:29:25.975 real 2m42.878s 00:29:25.975 user 2m31.413s 00:29:25.975 sys 0m11.921s 00:29:25.975 06:55:38 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:25.975 06:55:38 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:29:25.975 06:55:38 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:29:25.975 06:55:38 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:29:25.975 06:55:38 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:25.975 06:55:38 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:25.975 ************************************ 00:29:25.975 START TEST ftl_dirty_shutdown 00:29:25.975 ************************************ 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:29:25.975 * Looking for test storage... 00:29:25.975 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:25.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:25.975 --rc genhtml_branch_coverage=1 00:29:25.975 --rc genhtml_function_coverage=1 00:29:25.975 --rc genhtml_legend=1 00:29:25.975 --rc geninfo_all_blocks=1 00:29:25.975 --rc geninfo_unexecuted_blocks=1 00:29:25.975 00:29:25.975 ' 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:25.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:25.975 --rc genhtml_branch_coverage=1 00:29:25.975 --rc genhtml_function_coverage=1 00:29:25.975 --rc genhtml_legend=1 00:29:25.975 --rc geninfo_all_blocks=1 00:29:25.975 --rc geninfo_unexecuted_blocks=1 00:29:25.975 00:29:25.975 ' 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:25.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:25.975 --rc genhtml_branch_coverage=1 00:29:25.975 --rc genhtml_function_coverage=1 00:29:25.975 --rc genhtml_legend=1 00:29:25.975 --rc geninfo_all_blocks=1 00:29:25.975 --rc geninfo_unexecuted_blocks=1 00:29:25.975 00:29:25.975 ' 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:25.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:25.975 --rc genhtml_branch_coverage=1 00:29:25.975 --rc genhtml_function_coverage=1 00:29:25.975 --rc genhtml_legend=1 00:29:25.975 --rc geninfo_all_blocks=1 00:29:25.975 --rc geninfo_unexecuted_blocks=1 00:29:25.975 00:29:25.975 ' 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:29:25.975 06:55:38 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=79185 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 79185 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 79185 ']' 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:25.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:25.975 06:55:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:29:26.234 [2024-12-06 06:55:38.755967] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
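For orientation, the trace above shows the usual autotest launch pattern: dirty_shutdown.sh fixes its geometry (block_size=4096, chunk_size=262144, data_size=262144), installs a cleanup trap, backgrounds spdk_tgt on core mask 0x1, records the pid (79185 in this run), and blocks in waitforlisten until the target's RPC socket answers. A minimal standalone sketch of that start-and-wait pattern, assuming a built SPDK tree and the default /var/tmp/spdk.sock socket (the real waitforlisten in autotest_common.sh does considerably more bookkeeping):

```bash
#!/usr/bin/env bash
# Sketch: start spdk_tgt pinned to core 0 and wait for its RPC socket.
# SPDK_DIR is an assumption for illustration; this run uses
# /home/vagrant/spdk_repo/spdk and the default /var/tmp/spdk.sock socket.
SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
RPC_SOCK=/var/tmp/spdk.sock

"$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 &
svcpid=$!
trap 'kill "$svcpid" 2>/dev/null' SIGINT SIGTERM EXIT

# Poll until the target answers a trivial RPC; spdk_get_version is a
# cheap framework call that is available as soon as the app is up.
for _ in $(seq 1 100); do
    if "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" spdk_get_version &>/dev/null; then
        break
    fi
    sleep 0.5
done
```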
00:29:26.234 [2024-12-06 06:55:38.756776] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79185 ] 00:29:26.234 [2024-12-06 06:55:38.919223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.491 [2024-12-06 06:55:39.019477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:27.061 06:55:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:27.061 06:55:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:27.061 06:55:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:29:27.061 06:55:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:29:27.061 06:55:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:27.061 06:55:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:29:27.061 06:55:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:29:27.061 06:55:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:29:27.323 06:55:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:29:27.323 06:55:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:29:27.323 06:55:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:29:27.323 06:55:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:29:27.323 06:55:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:27.323 06:55:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:27.323 06:55:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:27.323 06:55:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:29:27.580 06:55:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:27.580 { 00:29:27.580 "name": "nvme0n1", 00:29:27.580 "aliases": [ 00:29:27.580 "9b92c05f-fe6d-43b9-bd2a-afc8339b7b4d" 00:29:27.580 ], 00:29:27.580 "product_name": "NVMe disk", 00:29:27.580 "block_size": 4096, 00:29:27.580 "num_blocks": 1310720, 00:29:27.580 "uuid": "9b92c05f-fe6d-43b9-bd2a-afc8339b7b4d", 00:29:27.580 "numa_id": -1, 00:29:27.580 "assigned_rate_limits": { 00:29:27.580 "rw_ios_per_sec": 0, 00:29:27.580 "rw_mbytes_per_sec": 0, 00:29:27.580 "r_mbytes_per_sec": 0, 00:29:27.580 "w_mbytes_per_sec": 0 00:29:27.580 }, 00:29:27.580 "claimed": true, 00:29:27.580 "claim_type": "read_many_write_one", 00:29:27.580 "zoned": false, 00:29:27.580 "supported_io_types": { 00:29:27.580 "read": true, 00:29:27.580 "write": true, 00:29:27.580 "unmap": true, 00:29:27.580 "flush": true, 00:29:27.580 "reset": true, 00:29:27.580 "nvme_admin": true, 00:29:27.580 "nvme_io": true, 00:29:27.580 "nvme_io_md": false, 00:29:27.580 "write_zeroes": true, 00:29:27.580 "zcopy": false, 00:29:27.580 "get_zone_info": false, 00:29:27.580 "zone_management": false, 00:29:27.580 "zone_append": false, 00:29:27.580 "compare": true, 00:29:27.580 "compare_and_write": false, 00:29:27.580 "abort": true, 00:29:27.580 "seek_hole": false, 00:29:27.580 "seek_data": false, 00:29:27.580 
"copy": true, 00:29:27.580 "nvme_iov_md": false 00:29:27.580 }, 00:29:27.580 "driver_specific": { 00:29:27.580 "nvme": [ 00:29:27.580 { 00:29:27.580 "pci_address": "0000:00:11.0", 00:29:27.580 "trid": { 00:29:27.580 "trtype": "PCIe", 00:29:27.580 "traddr": "0000:00:11.0" 00:29:27.580 }, 00:29:27.580 "ctrlr_data": { 00:29:27.580 "cntlid": 0, 00:29:27.580 "vendor_id": "0x1b36", 00:29:27.580 "model_number": "QEMU NVMe Ctrl", 00:29:27.580 "serial_number": "12341", 00:29:27.580 "firmware_revision": "8.0.0", 00:29:27.580 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:27.580 "oacs": { 00:29:27.580 "security": 0, 00:29:27.580 "format": 1, 00:29:27.580 "firmware": 0, 00:29:27.580 "ns_manage": 1 00:29:27.580 }, 00:29:27.580 "multi_ctrlr": false, 00:29:27.580 "ana_reporting": false 00:29:27.580 }, 00:29:27.580 "vs": { 00:29:27.580 "nvme_version": "1.4" 00:29:27.580 }, 00:29:27.580 "ns_data": { 00:29:27.580 "id": 1, 00:29:27.580 "can_share": false 00:29:27.580 } 00:29:27.580 } 00:29:27.580 ], 00:29:27.580 "mp_policy": "active_passive" 00:29:27.580 } 00:29:27.580 } 00:29:27.580 ]' 00:29:27.580 06:55:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:27.580 06:55:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:27.580 06:55:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:27.580 06:55:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:29:27.580 06:55:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:29:27.580 06:55:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:29:27.580 06:55:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:29:27.580 06:55:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:29:27.580 06:55:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:29:27.580 06:55:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:27.580 06:55:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:27.837 06:55:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=0ee3844a-203b-4e4a-a874-b0f424608289 00:29:27.837 06:55:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:29:27.837 06:55:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0ee3844a-203b-4e4a-a874-b0f424608289 00:29:28.095 06:55:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:29:28.095 06:55:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=2e324760-d999-4d09-b76e-8459b659faa3 00:29:28.095 06:55:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 2e324760-d999-4d09-b76e-8459b659faa3 00:29:28.352 06:55:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=6627461d-783c-4bd1-be21-a2409390fcd8 00:29:28.352 06:55:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:29:28.352 06:55:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 6627461d-783c-4bd1-be21-a2409390fcd8 00:29:28.352 06:55:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:29:28.353 06:55:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:29:28.353 06:55:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=6627461d-783c-4bd1-be21-a2409390fcd8 00:29:28.353 06:55:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:29:28.353 06:55:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 6627461d-783c-4bd1-be21-a2409390fcd8 00:29:28.353 06:55:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=6627461d-783c-4bd1-be21-a2409390fcd8 00:29:28.353 06:55:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:28.353 06:55:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:28.353 06:55:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:28.353 06:55:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6627461d-783c-4bd1-be21-a2409390fcd8 00:29:28.610 06:55:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:28.610 { 00:29:28.610 "name": "6627461d-783c-4bd1-be21-a2409390fcd8", 00:29:28.610 "aliases": [ 00:29:28.610 "lvs/nvme0n1p0" 00:29:28.610 ], 00:29:28.610 "product_name": "Logical Volume", 00:29:28.610 "block_size": 4096, 00:29:28.610 "num_blocks": 26476544, 00:29:28.610 "uuid": "6627461d-783c-4bd1-be21-a2409390fcd8", 00:29:28.610 "assigned_rate_limits": { 00:29:28.610 "rw_ios_per_sec": 0, 00:29:28.610 "rw_mbytes_per_sec": 0, 00:29:28.610 "r_mbytes_per_sec": 0, 00:29:28.610 "w_mbytes_per_sec": 0 00:29:28.610 }, 00:29:28.610 "claimed": false, 00:29:28.610 "zoned": false, 00:29:28.610 "supported_io_types": { 00:29:28.610 "read": true, 00:29:28.610 "write": true, 00:29:28.610 "unmap": true, 00:29:28.610 "flush": false, 00:29:28.610 "reset": true, 00:29:28.610 "nvme_admin": false, 00:29:28.610 "nvme_io": false, 00:29:28.610 "nvme_io_md": false, 00:29:28.610 "write_zeroes": true, 00:29:28.610 "zcopy": false, 00:29:28.610 "get_zone_info": false, 00:29:28.610 "zone_management": false, 00:29:28.610 "zone_append": false, 00:29:28.611 "compare": false, 00:29:28.611 "compare_and_write": false, 00:29:28.611 "abort": false, 00:29:28.611 "seek_hole": true, 00:29:28.611 "seek_data": true, 00:29:28.611 "copy": false, 00:29:28.611 "nvme_iov_md": false 00:29:28.611 }, 00:29:28.611 "driver_specific": { 00:29:28.611 "lvol": { 00:29:28.611 "lvol_store_uuid": "2e324760-d999-4d09-b76e-8459b659faa3", 00:29:28.611 "base_bdev": "nvme0n1", 00:29:28.611 "thin_provision": true, 00:29:28.611 "num_allocated_clusters": 0, 00:29:28.611 "snapshot": false, 00:29:28.611 "clone": false, 00:29:28.611 "esnap_clone": false 00:29:28.611 } 00:29:28.611 } 00:29:28.611 } 00:29:28.611 ]' 00:29:28.611 06:55:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:28.611 06:55:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:28.611 06:55:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:28.611 06:55:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:28.611 06:55:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:28.611 06:55:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:29:28.611 06:55:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:29:28.611 06:55:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:29:28.611 06:55:41 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:29:28.869 06:55:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:29:28.869 06:55:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:29:28.869 06:55:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 6627461d-783c-4bd1-be21-a2409390fcd8 00:29:28.869 06:55:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=6627461d-783c-4bd1-be21-a2409390fcd8 00:29:28.869 06:55:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:28.869 06:55:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:28.869 06:55:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:28.869 06:55:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6627461d-783c-4bd1-be21-a2409390fcd8 00:29:29.127 06:55:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:29.127 { 00:29:29.127 "name": "6627461d-783c-4bd1-be21-a2409390fcd8", 00:29:29.127 "aliases": [ 00:29:29.127 "lvs/nvme0n1p0" 00:29:29.127 ], 00:29:29.127 "product_name": "Logical Volume", 00:29:29.127 "block_size": 4096, 00:29:29.127 "num_blocks": 26476544, 00:29:29.127 "uuid": "6627461d-783c-4bd1-be21-a2409390fcd8", 00:29:29.127 "assigned_rate_limits": { 00:29:29.127 "rw_ios_per_sec": 0, 00:29:29.127 "rw_mbytes_per_sec": 0, 00:29:29.127 "r_mbytes_per_sec": 0, 00:29:29.127 "w_mbytes_per_sec": 0 00:29:29.127 }, 00:29:29.127 "claimed": false, 00:29:29.127 "zoned": false, 00:29:29.127 "supported_io_types": { 00:29:29.127 "read": true, 00:29:29.127 "write": true, 00:29:29.127 "unmap": true, 00:29:29.127 "flush": false, 00:29:29.127 "reset": true, 00:29:29.127 "nvme_admin": false, 00:29:29.127 "nvme_io": false, 00:29:29.127 "nvme_io_md": false, 00:29:29.127 "write_zeroes": true, 00:29:29.127 "zcopy": false, 00:29:29.127 "get_zone_info": false, 00:29:29.127 "zone_management": false, 00:29:29.127 "zone_append": false, 00:29:29.127 "compare": false, 00:29:29.127 "compare_and_write": false, 00:29:29.127 "abort": false, 00:29:29.127 "seek_hole": true, 00:29:29.127 "seek_data": true, 00:29:29.127 "copy": false, 00:29:29.127 "nvme_iov_md": false 00:29:29.127 }, 00:29:29.127 "driver_specific": { 00:29:29.127 "lvol": { 00:29:29.127 "lvol_store_uuid": "2e324760-d999-4d09-b76e-8459b659faa3", 00:29:29.127 "base_bdev": "nvme0n1", 00:29:29.127 "thin_provision": true, 00:29:29.127 "num_allocated_clusters": 0, 00:29:29.127 "snapshot": false, 00:29:29.127 "clone": false, 00:29:29.127 "esnap_clone": false 00:29:29.127 } 00:29:29.127 } 00:29:29.127 } 00:29:29.127 ]' 00:29:29.127 06:55:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:29.127 06:55:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:29.127 06:55:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:29.127 06:55:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:29.127 06:55:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:29.127 06:55:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:29:29.127 06:55:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:29:29.127 06:55:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:29:29.385 06:55:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:29:29.385 06:55:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 6627461d-783c-4bd1-be21-a2409390fcd8 00:29:29.385 06:55:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=6627461d-783c-4bd1-be21-a2409390fcd8 00:29:29.385 06:55:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:29.385 06:55:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:29.385 06:55:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:29.385 06:55:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6627461d-783c-4bd1-be21-a2409390fcd8 00:29:29.643 06:55:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:29.643 { 00:29:29.643 "name": "6627461d-783c-4bd1-be21-a2409390fcd8", 00:29:29.643 "aliases": [ 00:29:29.643 "lvs/nvme0n1p0" 00:29:29.643 ], 00:29:29.643 "product_name": "Logical Volume", 00:29:29.643 "block_size": 4096, 00:29:29.643 "num_blocks": 26476544, 00:29:29.643 "uuid": "6627461d-783c-4bd1-be21-a2409390fcd8", 00:29:29.643 "assigned_rate_limits": { 00:29:29.643 "rw_ios_per_sec": 0, 00:29:29.643 "rw_mbytes_per_sec": 0, 00:29:29.643 "r_mbytes_per_sec": 0, 00:29:29.643 "w_mbytes_per_sec": 0 00:29:29.643 }, 00:29:29.643 "claimed": false, 00:29:29.643 "zoned": false, 00:29:29.643 "supported_io_types": { 00:29:29.643 "read": true, 00:29:29.643 "write": true, 00:29:29.643 "unmap": true, 00:29:29.643 "flush": false, 00:29:29.643 "reset": true, 00:29:29.643 "nvme_admin": false, 00:29:29.643 "nvme_io": false, 00:29:29.643 "nvme_io_md": false, 00:29:29.643 "write_zeroes": true, 00:29:29.643 "zcopy": false, 00:29:29.643 "get_zone_info": false, 00:29:29.643 "zone_management": false, 00:29:29.643 "zone_append": false, 00:29:29.643 "compare": false, 00:29:29.643 "compare_and_write": false, 00:29:29.643 "abort": false, 00:29:29.643 "seek_hole": true, 00:29:29.643 "seek_data": true, 00:29:29.643 "copy": false, 00:29:29.643 "nvme_iov_md": false 00:29:29.643 }, 00:29:29.643 "driver_specific": { 00:29:29.643 "lvol": { 00:29:29.643 "lvol_store_uuid": "2e324760-d999-4d09-b76e-8459b659faa3", 00:29:29.643 "base_bdev": "nvme0n1", 00:29:29.643 "thin_provision": true, 00:29:29.643 "num_allocated_clusters": 0, 00:29:29.643 "snapshot": false, 00:29:29.643 "clone": false, 00:29:29.643 "esnap_clone": false 00:29:29.643 } 00:29:29.643 } 00:29:29.643 } 00:29:29.643 ]' 00:29:29.643 06:55:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:29.643 06:55:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:29.643 06:55:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:29.643 06:55:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:29.643 06:55:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:29.643 06:55:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:29:29.644 06:55:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:29:29.644 06:55:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 6627461d-783c-4bd1-be21-a2409390fcd8 
--l2p_dram_limit 10' 00:29:29.644 06:55:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:29:29.644 06:55:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:29:29.644 06:55:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:29:29.644 06:55:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 6627461d-783c-4bd1-be21-a2409390fcd8 --l2p_dram_limit 10 -c nvc0n1p0 00:29:29.902 [2024-12-06 06:55:42.440166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.902 [2024-12-06 06:55:42.440341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:29.902 [2024-12-06 06:55:42.440363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:29.902 [2024-12-06 06:55:42.440370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.902 [2024-12-06 06:55:42.440431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.902 [2024-12-06 06:55:42.440440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:29.902 [2024-12-06 06:55:42.440449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:29:29.902 [2024-12-06 06:55:42.440455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.902 [2024-12-06 06:55:42.440494] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:29.902 [2024-12-06 06:55:42.441099] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:29.902 [2024-12-06 06:55:42.441116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.902 [2024-12-06 06:55:42.441122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:29.902 [2024-12-06 06:55:42.441131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.629 ms 00:29:29.902 [2024-12-06 06:55:42.441137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.902 [2024-12-06 06:55:42.441190] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 1418b45b-9351-478f-9d9d-979a1b5eff85 00:29:29.902 [2024-12-06 06:55:42.442200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.902 [2024-12-06 06:55:42.442219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:29:29.902 [2024-12-06 06:55:42.442228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:29:29.902 [2024-12-06 06:55:42.442235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.902 [2024-12-06 06:55:42.447443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.902 [2024-12-06 06:55:42.447565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:29.902 [2024-12-06 06:55:42.447578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.170 ms 00:29:29.902 [2024-12-06 06:55:42.447586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.902 [2024-12-06 06:55:42.447657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.902 [2024-12-06 06:55:42.447666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:29.902 [2024-12-06 06:55:42.447673] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:29:29.902 [2024-12-06 06:55:42.447683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.902 [2024-12-06 06:55:42.447715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.902 [2024-12-06 06:55:42.447723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:29.902 [2024-12-06 06:55:42.447731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:29.903 [2024-12-06 06:55:42.447739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.903 [2024-12-06 06:55:42.447755] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:29.903 [2024-12-06 06:55:42.450799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.903 [2024-12-06 06:55:42.450895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:29.903 [2024-12-06 06:55:42.450910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.046 ms 00:29:29.903 [2024-12-06 06:55:42.450917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.903 [2024-12-06 06:55:42.450947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.903 [2024-12-06 06:55:42.450954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:29.903 [2024-12-06 06:55:42.450962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:29.903 [2024-12-06 06:55:42.450968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.903 [2024-12-06 06:55:42.450989] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:29:29.903 [2024-12-06 06:55:42.451105] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:29.903 [2024-12-06 06:55:42.451118] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:29.903 [2024-12-06 06:55:42.451127] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:29.903 [2024-12-06 06:55:42.451136] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:29.903 [2024-12-06 06:55:42.451147] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:29.903 [2024-12-06 06:55:42.451156] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:29.903 [2024-12-06 06:55:42.451162] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:29.903 [2024-12-06 06:55:42.451171] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:29.903 [2024-12-06 06:55:42.451177] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:29.903 [2024-12-06 06:55:42.451184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.903 [2024-12-06 06:55:42.451195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:29.903 [2024-12-06 06:55:42.451203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:29:29.903 [2024-12-06 06:55:42.451209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.903 [2024-12-06 06:55:42.451278] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.903 [2024-12-06 06:55:42.451284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:29.903 [2024-12-06 06:55:42.451292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:29:29.903 [2024-12-06 06:55:42.451297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.903 [2024-12-06 06:55:42.451380] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:29.903 [2024-12-06 06:55:42.451398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:29.903 [2024-12-06 06:55:42.451406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:29.903 [2024-12-06 06:55:42.451412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:29.903 [2024-12-06 06:55:42.451419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:29.903 [2024-12-06 06:55:42.451425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:29.903 [2024-12-06 06:55:42.451433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:29.903 [2024-12-06 06:55:42.451439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:29.903 [2024-12-06 06:55:42.451446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:29.903 [2024-12-06 06:55:42.451451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:29.903 [2024-12-06 06:55:42.451457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:29.903 [2024-12-06 06:55:42.451476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:29.903 [2024-12-06 06:55:42.451484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:29.903 [2024-12-06 06:55:42.451490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:29.903 [2024-12-06 06:55:42.451497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:29.903 [2024-12-06 06:55:42.451502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:29.903 [2024-12-06 06:55:42.451510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:29.903 [2024-12-06 06:55:42.451516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:29.903 [2024-12-06 06:55:42.451522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:29.903 [2024-12-06 06:55:42.451529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:29.903 [2024-12-06 06:55:42.451536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:29.903 [2024-12-06 06:55:42.451541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:29.903 [2024-12-06 06:55:42.451548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:29.903 [2024-12-06 06:55:42.451554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:29.903 [2024-12-06 06:55:42.451560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:29.903 [2024-12-06 06:55:42.451568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:29.903 [2024-12-06 06:55:42.451575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:29.903 [2024-12-06 06:55:42.451580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:29.903 [2024-12-06 06:55:42.451587] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:29.903 [2024-12-06 06:55:42.451592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:29.903 [2024-12-06 06:55:42.451598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:29.903 [2024-12-06 06:55:42.451604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:29.903 [2024-12-06 06:55:42.451613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:29.903 [2024-12-06 06:55:42.451618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:29.903 [2024-12-06 06:55:42.451625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:29.903 [2024-12-06 06:55:42.451630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:29.903 [2024-12-06 06:55:42.451637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:29.903 [2024-12-06 06:55:42.451642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:29.903 [2024-12-06 06:55:42.451648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:29.903 [2024-12-06 06:55:42.451654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:29.903 [2024-12-06 06:55:42.451660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:29.903 [2024-12-06 06:55:42.451665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:29.903 [2024-12-06 06:55:42.451672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:29.903 [2024-12-06 06:55:42.451677] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:29.903 [2024-12-06 06:55:42.451684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:29.903 [2024-12-06 06:55:42.451691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:29.903 [2024-12-06 06:55:42.451697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:29.903 [2024-12-06 06:55:42.451703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:29.903 [2024-12-06 06:55:42.451711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:29.903 [2024-12-06 06:55:42.451717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:29.903 [2024-12-06 06:55:42.451723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:29.903 [2024-12-06 06:55:42.451730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:29.903 [2024-12-06 06:55:42.451737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:29.903 [2024-12-06 06:55:42.451744] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:29.903 [2024-12-06 06:55:42.451755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:29.903 [2024-12-06 06:55:42.451762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:29.903 [2024-12-06 06:55:42.451770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:29.903 [2024-12-06 06:55:42.451776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:29.903 [2024-12-06 06:55:42.451784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:29.903 [2024-12-06 06:55:42.451790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:29.903 [2024-12-06 06:55:42.451796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:29.903 [2024-12-06 06:55:42.451802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:29.903 [2024-12-06 06:55:42.451809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:29.903 [2024-12-06 06:55:42.451814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:29.903 [2024-12-06 06:55:42.451822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:29.903 [2024-12-06 06:55:42.451828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:29.903 [2024-12-06 06:55:42.451835] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:29.903 [2024-12-06 06:55:42.451841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:29.903 [2024-12-06 06:55:42.451848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:29.903 [2024-12-06 06:55:42.451853] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:29.903 [2024-12-06 06:55:42.451861] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:29.903 [2024-12-06 06:55:42.451867] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:29.903 [2024-12-06 06:55:42.451874] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:29.903 [2024-12-06 06:55:42.451880] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:29.903 [2024-12-06 06:55:42.451887] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:29.903 [2024-12-06 06:55:42.451893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.903 [2024-12-06 06:55:42.451900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:29.903 [2024-12-06 06:55:42.451906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms 00:29:29.903 [2024-12-06 06:55:42.451913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.903 [2024-12-06 06:55:42.451955] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:29:29.903 [2024-12-06 06:55:42.451966] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:29:33.183 [2024-12-06 06:55:45.177141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.183 [2024-12-06 06:55:45.177204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:29:33.183 [2024-12-06 06:55:45.177218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2725.175 ms 00:29:33.183 [2024-12-06 06:55:45.177229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.183 [2024-12-06 06:55:45.202937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.183 [2024-12-06 06:55:45.202987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:33.183 [2024-12-06 06:55:45.203000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.502 ms 00:29:33.183 [2024-12-06 06:55:45.203010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.183 [2024-12-06 06:55:45.203137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.183 [2024-12-06 06:55:45.203149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:33.183 [2024-12-06 06:55:45.203157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:29:33.183 [2024-12-06 06:55:45.203170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.183 [2024-12-06 06:55:45.233657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.183 [2024-12-06 06:55:45.233695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:33.183 [2024-12-06 06:55:45.233706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.452 ms 00:29:33.183 [2024-12-06 06:55:45.233715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.183 [2024-12-06 06:55:45.233745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.183 [2024-12-06 06:55:45.233758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:33.183 [2024-12-06 06:55:45.233766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:29:33.183 [2024-12-06 06:55:45.233781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.183 [2024-12-06 06:55:45.234114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.183 [2024-12-06 06:55:45.234133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:33.183 [2024-12-06 06:55:45.234142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:29:33.183 [2024-12-06 06:55:45.234150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.183 [2024-12-06 06:55:45.234254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.183 [2024-12-06 06:55:45.234264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:33.183 [2024-12-06 06:55:45.234274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:29:33.183 [2024-12-06 06:55:45.234285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.183 [2024-12-06 06:55:45.248425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.183 [2024-12-06 06:55:45.248594] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:33.183 [2024-12-06 06:55:45.248611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.121 ms 00:29:33.183 [2024-12-06 06:55:45.248621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.183 [2024-12-06 06:55:45.273756] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:33.183 [2024-12-06 06:55:45.276604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.183 [2024-12-06 06:55:45.276638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:33.183 [2024-12-06 06:55:45.276653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.904 ms 00:29:33.183 [2024-12-06 06:55:45.276662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.183 [2024-12-06 06:55:45.338512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.183 [2024-12-06 06:55:45.338561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:29:33.183 [2024-12-06 06:55:45.338575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.807 ms 00:29:33.183 [2024-12-06 06:55:45.338584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.183 [2024-12-06 06:55:45.338765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.183 [2024-12-06 06:55:45.338778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:33.183 [2024-12-06 06:55:45.338790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:29:33.183 [2024-12-06 06:55:45.338798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.183 [2024-12-06 06:55:45.362147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.183 [2024-12-06 06:55:45.362186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:29:33.183 [2024-12-06 06:55:45.362199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.304 ms 00:29:33.183 [2024-12-06 06:55:45.362207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.183 [2024-12-06 06:55:45.384676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.183 [2024-12-06 06:55:45.384716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:29:33.183 [2024-12-06 06:55:45.384730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.428 ms 00:29:33.183 [2024-12-06 06:55:45.384738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.183 [2024-12-06 06:55:45.385314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.183 [2024-12-06 06:55:45.385334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:33.183 [2024-12-06 06:55:45.385344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:29:33.183 [2024-12-06 06:55:45.385354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.183 [2024-12-06 06:55:45.453927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.183 [2024-12-06 06:55:45.453971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:29:33.183 [2024-12-06 06:55:45.453988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.524 ms 00:29:33.183 [2024-12-06 06:55:45.453996] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.183 [2024-12-06 06:55:45.477882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.183 [2024-12-06 06:55:45.477922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:29:33.183 [2024-12-06 06:55:45.477937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.812 ms 00:29:33.183 [2024-12-06 06:55:45.477945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.183 [2024-12-06 06:55:45.500810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.183 [2024-12-06 06:55:45.500847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:29:33.183 [2024-12-06 06:55:45.500860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.823 ms 00:29:33.183 [2024-12-06 06:55:45.500868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.183 [2024-12-06 06:55:45.523489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.183 [2024-12-06 06:55:45.523698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:33.183 [2024-12-06 06:55:45.523718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.582 ms 00:29:33.183 [2024-12-06 06:55:45.523726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.183 [2024-12-06 06:55:45.523763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.183 [2024-12-06 06:55:45.523772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:33.183 [2024-12-06 06:55:45.523785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:33.183 [2024-12-06 06:55:45.523792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.183 [2024-12-06 06:55:45.523866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.183 [2024-12-06 06:55:45.523878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:33.183 [2024-12-06 06:55:45.523888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:29:33.183 [2024-12-06 06:55:45.523895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.183 [2024-12-06 06:55:45.524745] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3084.156 ms, result 0 00:29:33.183 { 00:29:33.183 "name": "ftl0", 00:29:33.183 "uuid": "1418b45b-9351-478f-9d9d-979a1b5eff85" 00:29:33.183 } 00:29:33.183 06:55:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:29:33.183 06:55:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:29:33.183 06:55:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:29:33.183 06:55:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:29:33.183 06:55:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:29:33.440 /dev/nbd0 00:29:33.440 06:55:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:29:33.440 06:55:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:29:33.440 06:55:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:29:33.440 06:55:46 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:33.440 06:55:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:33.440 06:55:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:29:33.440 06:55:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:29:33.440 06:55:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:33.440 06:55:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:33.440 06:55:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:29:33.440 1+0 records in 00:29:33.440 1+0 records out 00:29:33.440 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000219734 s, 18.6 MB/s 00:29:33.440 06:55:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:29:33.441 06:55:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:29:33.441 06:55:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:29:33.441 06:55:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:33.441 06:55:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:29:33.441 06:55:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:29:33.441 [2024-12-06 06:55:46.083109] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:29:33.441 [2024-12-06 06:55:46.083226] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79323 ] 00:29:33.698 [2024-12-06 06:55:46.242713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.698 [2024-12-06 06:55:46.342759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:35.072  [2024-12-06T06:55:48.749Z] Copying: 195/1024 [MB] (195 MBps) [2024-12-06T06:55:49.686Z] Copying: 392/1024 [MB] (196 MBps) [2024-12-06T06:55:50.625Z] Copying: 589/1024 [MB] (197 MBps) [2024-12-06T06:55:51.563Z] Copying: 832/1024 [MB] (243 MBps) [2024-12-06T06:55:52.132Z] Copying: 1024/1024 [MB] (average 214 MBps) 00:29:39.391 00:29:39.391 06:55:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:41.929 06:55:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:29:41.929 [2024-12-06 06:55:54.164883] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
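The xtrace above captures the test's data path in full: waitfornbd polls /proc/partitions until the kernel lists the new device, then confirms it is readable with a single 4 KiB direct-I/O dd; spdk_dd then fills a reference file from /dev/urandom, md5sum records its checksum, and a second spdk_dd (dirty_shutdown.sh@77) pushes that file through /dev/nbd0 onto the FTL bdev with oflag=direct. A minimal bash sketch of the readiness check, reconstructed from the trace, follows; the 20-attempt bound and the dd probe come straight from the xtrace, while the one-second sleep and the /tmp scratch path are assumptions added for illustration.

#!/usr/bin/env bash
# Sketch of an nbd readiness check, reconstructed from the waitfornbd
# xtrace above. The retry bound (20) and the 4 KiB direct-I/O probe match
# the trace; the sleep interval and scratch path are assumptions.
waitfornbd_sketch() {
    local nbd_name=$1 i size
    local test_file=/tmp/nbdtest    # assumed scratch location
    for ((i = 1; i <= 20; i++)); do
        # The device is usable once the kernel publishes it in /proc/partitions.
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 1    # assumed back-off; the xtrace does not show the delay
    done
    ((i <= 20)) || return 1
    # Confirm the device actually serves reads: one block, O_DIRECT.
    dd if="/dev/$nbd_name" of="$test_file" bs=4096 count=1 iflag=direct || return 1
    size=$(stat -c %s "$test_file")
    rm -f "$test_file"
    [[ $size != 0 ]]    # a non-empty read means the block device is live
}

Called as waitfornbd_sketch nbd0 immediately after the nbd_start_disk RPC, this mirrors the gate the traced helper applies before the copy begins.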
00:29:41.929 [2024-12-06 06:55:54.165003] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79410 ] 00:29:41.929 [2024-12-06 06:55:54.325503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:41.929 [2024-12-06 06:55:54.422624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:43.306  [2024-12-06T06:55:57.004Z] Copying: 26/1024 [MB] (26 MBps) [2024-12-06T06:55:57.943Z] Copying: 52/1024 [MB] (26 MBps) [2024-12-06T06:55:58.884Z] Copying: 77/1024 [MB] (24 MBps) [2024-12-06T06:55:59.823Z] Copying: 103/1024 [MB] (26 MBps) [2024-12-06T06:56:00.761Z] Copying: 125/1024 [MB] (21 MBps) [2024-12-06T06:56:01.694Z] Copying: 153/1024 [MB] (28 MBps) [2024-12-06T06:56:03.068Z] Copying: 183/1024 [MB] (29 MBps) [2024-12-06T06:56:04.000Z] Copying: 212/1024 [MB] (28 MBps) [2024-12-06T06:56:04.946Z] Copying: 242/1024 [MB] (29 MBps) [2024-12-06T06:56:05.883Z] Copying: 271/1024 [MB] (28 MBps) [2024-12-06T06:56:06.823Z] Copying: 299/1024 [MB] (28 MBps) [2024-12-06T06:56:07.766Z] Copying: 314480/1048576 [kB] (7976 kBps) [2024-12-06T06:56:08.708Z] Copying: 319424/1048576 [kB] (4944 kBps) [2024-12-06T06:56:09.649Z] Copying: 336/1024 [MB] (24 MBps) [2024-12-06T06:56:11.034Z] Copying: 357/1024 [MB] (20 MBps) [2024-12-06T06:56:11.977Z] Copying: 378/1024 [MB] (21 MBps) [2024-12-06T06:56:12.916Z] Copying: 402/1024 [MB] (24 MBps) [2024-12-06T06:56:13.859Z] Copying: 426/1024 [MB] (24 MBps) [2024-12-06T06:56:14.802Z] Copying: 453/1024 [MB] (27 MBps) [2024-12-06T06:56:15.775Z] Copying: 476/1024 [MB] (22 MBps) [2024-12-06T06:56:16.714Z] Copying: 500/1024 [MB] (23 MBps) [2024-12-06T06:56:17.654Z] Copying: 528/1024 [MB] (27 MBps) [2024-12-06T06:56:19.028Z] Copying: 554/1024 [MB] (26 MBps) [2024-12-06T06:56:19.970Z] Copying: 586/1024 [MB] (31 MBps) [2024-12-06T06:56:20.913Z] Copying: 612/1024 [MB] (26 MBps) [2024-12-06T06:56:21.856Z] Copying: 642/1024 [MB] (29 MBps) [2024-12-06T06:56:22.795Z] Copying: 666/1024 [MB] (24 MBps) [2024-12-06T06:56:23.730Z] Copying: 695/1024 [MB] (29 MBps) [2024-12-06T06:56:24.664Z] Copying: 727/1024 [MB] (31 MBps) [2024-12-06T06:56:26.040Z] Copying: 758/1024 [MB] (31 MBps) [2024-12-06T06:56:26.975Z] Copying: 789/1024 [MB] (30 MBps) [2024-12-06T06:56:27.916Z] Copying: 818/1024 [MB] (29 MBps) [2024-12-06T06:56:28.855Z] Copying: 845/1024 [MB] (26 MBps) [2024-12-06T06:56:29.789Z] Copying: 873/1024 [MB] (28 MBps) [2024-12-06T06:56:30.817Z] Copying: 903/1024 [MB] (29 MBps) [2024-12-06T06:56:31.760Z] Copying: 932/1024 [MB] (29 MBps) [2024-12-06T06:56:32.704Z] Copying: 959/1024 [MB] (27 MBps) [2024-12-06T06:56:33.646Z] Copying: 987/1024 [MB] (27 MBps) [2024-12-06T06:56:33.908Z] Copying: 1017/1024 [MB] (29 MBps) [2024-12-06T06:56:34.480Z] Copying: 1024/1024 [MB] (average 26 MBps) 00:30:21.739 00:30:21.739 06:56:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:30:21.739 06:56:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:30:22.000 06:56:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:30:22.263 [2024-12-06 06:56:34.817853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.263 [2024-12-06 06:56:34.817896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Deinit core IO channel 00:30:22.263 [2024-12-06 06:56:34.817907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:22.263 [2024-12-06 06:56:34.817915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.263 [2024-12-06 06:56:34.817936] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:22.263 [2024-12-06 06:56:34.820075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.263 [2024-12-06 06:56:34.820190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:22.263 [2024-12-06 06:56:34.820206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.124 ms 00:30:22.263 [2024-12-06 06:56:34.820213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.263 [2024-12-06 06:56:34.821691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.263 [2024-12-06 06:56:34.821718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:22.263 [2024-12-06 06:56:34.821728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.452 ms 00:30:22.263 [2024-12-06 06:56:34.821734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.263 [2024-12-06 06:56:34.835339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.263 [2024-12-06 06:56:34.835449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:22.263 [2024-12-06 06:56:34.835477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.575 ms 00:30:22.263 [2024-12-06 06:56:34.835485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.263 [2024-12-06 06:56:34.840441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.263 [2024-12-06 06:56:34.840474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:22.263 [2024-12-06 06:56:34.840484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.929 ms 00:30:22.263 [2024-12-06 06:56:34.840491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.263 [2024-12-06 06:56:34.859018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.263 [2024-12-06 06:56:34.859044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:22.263 [2024-12-06 06:56:34.859054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.471 ms 00:30:22.263 [2024-12-06 06:56:34.859060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.263 [2024-12-06 06:56:34.871277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.263 [2024-12-06 06:56:34.871306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:22.263 [2024-12-06 06:56:34.871320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.186 ms 00:30:22.263 [2024-12-06 06:56:34.871327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.263 [2024-12-06 06:56:34.871443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.263 [2024-12-06 06:56:34.871452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:22.263 [2024-12-06 06:56:34.871460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:30:22.263 [2024-12-06 06:56:34.871479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:30:22.263 [2024-12-06 06:56:34.889459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.263 [2024-12-06 06:56:34.889489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:22.263 [2024-12-06 06:56:34.889498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.964 ms 00:30:22.263 [2024-12-06 06:56:34.889504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.263 [2024-12-06 06:56:34.906494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.263 [2024-12-06 06:56:34.906519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:22.263 [2024-12-06 06:56:34.906528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.962 ms 00:30:22.263 [2024-12-06 06:56:34.906534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.263 [2024-12-06 06:56:34.923290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.263 [2024-12-06 06:56:34.923315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:22.263 [2024-12-06 06:56:34.923324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.719 ms 00:30:22.263 [2024-12-06 06:56:34.923330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.263 [2024-12-06 06:56:34.940472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.263 [2024-12-06 06:56:34.940621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:22.263 [2024-12-06 06:56:34.940637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.078 ms 00:30:22.263 [2024-12-06 06:56:34.940642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.263 [2024-12-06 06:56:34.940668] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:22.263 [2024-12-06 06:56:34.940679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:22.263 [2024-12-06 06:56:34.940689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:22.263 [2024-12-06 06:56:34.940695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:22.263 [2024-12-06 06:56:34.940703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:22.263 [2024-12-06 06:56:34.940709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:22.263 [2024-12-06 06:56:34.940717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:22.263 [2024-12-06 06:56:34.940723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:22.263 [2024-12-06 06:56:34.940732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:22.263 [2024-12-06 06:56:34.940738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:22.263 [2024-12-06 06:56:34.940745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:22.263 [2024-12-06 06:56:34.940751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:22.263 [2024-12-06 
06:56:34.940758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:22.263 [2024-12-06 06:56:34.940764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:22.263 [2024-12-06 06:56:34.940771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:22.263 [2024-12-06 06:56:34.940776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:22.263 [2024-12-06 06:56:34.940783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:22.263 [2024-12-06 06:56:34.940789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:22.263 [2024-12-06 06:56:34.940796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:22.263 [2024-12-06 06:56:34.940802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:22.263 [2024-12-06 06:56:34.940810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:22.263 [2024-12-06 06:56:34.940815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:22.263 [2024-12-06 06:56:34.940823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:22.263 [2024-12-06 06:56:34.940829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:22.263 [2024-12-06 06:56:34.940838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:22.263 [2024-12-06 06:56:34.940844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:22.263 [2024-12-06 06:56:34.940851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.940856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.940864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.940870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.940877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.940883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.940891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.940897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.940904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.940910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.940917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 
00:30:22.264 [2024-12-06 06:56:34.940923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.940931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.940936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.940944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.940950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.940957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.940963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.940970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.940976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.940983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.940994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 
wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:22.264 [2024-12-06 06:56:34.941370] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:22.264 [2024-12-06 06:56:34.941377] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1418b45b-9351-478f-9d9d-979a1b5eff85 00:30:22.264 [2024-12-06 06:56:34.941384] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:22.264 [2024-12-06 06:56:34.941392] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:22.264 [2024-12-06 06:56:34.941399] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:22.264 [2024-12-06 06:56:34.941406] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:22.264 [2024-12-06 06:56:34.941411] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:22.264 [2024-12-06 06:56:34.941419] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:22.264 [2024-12-06 06:56:34.941424] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:22.264 [2024-12-06 06:56:34.941430] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:22.264 [2024-12-06 06:56:34.941435] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:22.264 [2024-12-06 06:56:34.941442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.264 [2024-12-06 06:56:34.941447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:22.264 
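Each line of the bands-validity dump above reads as valid LBAs / band capacity, followed by the band's write count and state; all 100 bands report 0 / 261120 in state free, and the statistics block pairs 960 total writes with 0 user writes, which is why WAF (write amplification factor) prints as inf: the ratio's denominator, user writes, is zero. When eyeballing a captured copy of this output, a summary is quicker than scanning 100 lines; ftl.log below is a hypothetical file holding this dump.

# Tally bands per state from a saved copy of the dump (ftl.log is a
# hypothetical capture; the 'state:' token matches the dump format above).
grep -o 'state: [a-z]*' ftl.log | sort | uniq -c

# List any band still holding valid data (non-zero count before the '/').
grep -E 'Band [0-9]+: [1-9][0-9]* / ' ftl.log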
[2024-12-06 06:56:34.941455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.775 ms 00:30:22.264 [2024-12-06 06:56:34.941476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.264 [2024-12-06 06:56:34.951199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.264 [2024-12-06 06:56:34.951225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:22.264 [2024-12-06 06:56:34.951235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.697 ms 00:30:22.264 [2024-12-06 06:56:34.951241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.264 [2024-12-06 06:56:34.951539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.264 [2024-12-06 06:56:34.951547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:22.264 [2024-12-06 06:56:34.951555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:30:22.264 [2024-12-06 06:56:34.951560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.264 [2024-12-06 06:56:34.984453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:22.264 [2024-12-06 06:56:34.984493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:22.264 [2024-12-06 06:56:34.984503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:22.264 [2024-12-06 06:56:34.984509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.264 [2024-12-06 06:56:34.984555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:22.264 [2024-12-06 06:56:34.984562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:22.264 [2024-12-06 06:56:34.984570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:22.264 [2024-12-06 06:56:34.984576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.264 [2024-12-06 06:56:34.984630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:22.264 [2024-12-06 06:56:34.984639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:22.264 [2024-12-06 06:56:34.984647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:22.264 [2024-12-06 06:56:34.984653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.264 [2024-12-06 06:56:34.984669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:22.264 [2024-12-06 06:56:34.984674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:22.264 [2024-12-06 06:56:34.984681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:22.264 [2024-12-06 06:56:34.984687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.531 [2024-12-06 06:56:35.044197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:22.531 [2024-12-06 06:56:35.044342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:22.531 [2024-12-06 06:56:35.044359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:22.531 [2024-12-06 06:56:35.044365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.531 [2024-12-06 06:56:35.093030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:22.531 [2024-12-06 06:56:35.093067] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:22.531 [2024-12-06 06:56:35.093078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:22.531 [2024-12-06 06:56:35.093084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.531 [2024-12-06 06:56:35.093176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:22.531 [2024-12-06 06:56:35.093185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:22.531 [2024-12-06 06:56:35.093195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:22.531 [2024-12-06 06:56:35.093201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.531 [2024-12-06 06:56:35.093241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:22.531 [2024-12-06 06:56:35.093248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:22.531 [2024-12-06 06:56:35.093256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:22.531 [2024-12-06 06:56:35.093262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.531 [2024-12-06 06:56:35.093332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:22.531 [2024-12-06 06:56:35.093339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:22.531 [2024-12-06 06:56:35.093347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:22.531 [2024-12-06 06:56:35.093354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.531 [2024-12-06 06:56:35.093380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:22.531 [2024-12-06 06:56:35.093387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:22.531 [2024-12-06 06:56:35.093394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:22.531 [2024-12-06 06:56:35.093400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.531 [2024-12-06 06:56:35.093429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:22.532 [2024-12-06 06:56:35.093436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:22.532 [2024-12-06 06:56:35.093444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:22.532 [2024-12-06 06:56:35.093451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.532 [2024-12-06 06:56:35.093505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:22.532 [2024-12-06 06:56:35.093513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:22.532 [2024-12-06 06:56:35.093521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:22.532 [2024-12-06 06:56:35.093527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.532 [2024-12-06 06:56:35.093630] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 275.748 ms, result 0 00:30:22.532 true 00:30:22.532 06:56:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 79185 00:30:22.532 06:56:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid79185 00:30:22.532 06:56:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:30:22.532 [2024-12-06 06:56:35.182689] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:30:22.532 [2024-12-06 06:56:35.182955] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79835 ] 00:30:22.794 [2024-12-06 06:56:35.338291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:22.794 [2024-12-06 06:56:35.420811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:24.179  [2024-12-06T06:56:37.862Z] Copying: 249/1024 [MB] (249 MBps) [2024-12-06T06:56:38.804Z] Copying: 503/1024 [MB] (254 MBps) [2024-12-06T06:56:39.810Z] Copying: 758/1024 [MB] (254 MBps) [2024-12-06T06:56:39.810Z] Copying: 1007/1024 [MB] (249 MBps) [2024-12-06T06:56:40.378Z] Copying: 1024/1024 [MB] (average 251 MBps) 00:30:27.637 00:30:27.637 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 79185 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:30:27.638 06:56:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:27.638 [2024-12-06 06:56:40.306233] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:30:27.638 [2024-12-06 06:56:40.306703] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79894 ] 00:30:27.897 [2024-12-06 06:56:40.464477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:27.897 [2024-12-06 06:56:40.568441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:28.158 [2024-12-06 06:56:40.836217] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:28.158 [2024-12-06 06:56:40.836286] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:28.419 [2024-12-06 06:56:40.900381] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:30:28.419 [2024-12-06 06:56:40.900618] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:30:28.419 [2024-12-06 06:56:40.900890] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:30:28.419 [2024-12-06 06:56:41.117074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.419 [2024-12-06 06:56:41.117115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:28.419 [2024-12-06 06:56:41.117126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:28.419 [2024-12-06 06:56:41.117134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.419 [2024-12-06 06:56:41.117175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.419 [2024-12-06 06:56:41.117183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:28.419 [2024-12-06 06:56:41.117189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:30:28.419 [2024-12-06 06:56:41.117196] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:30:28.419 [2024-12-06 06:56:41.117209] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:28.419 [2024-12-06 06:56:41.117751] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:28.419 [2024-12-06 06:56:41.117769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.419 [2024-12-06 06:56:41.117776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:28.419 [2024-12-06 06:56:41.117783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:30:28.419 [2024-12-06 06:56:41.117788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.419 [2024-12-06 06:56:41.119086] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:28.419 [2024-12-06 06:56:41.128775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.419 [2024-12-06 06:56:41.128805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:28.419 [2024-12-06 06:56:41.128815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.692 ms 00:30:28.419 [2024-12-06 06:56:41.128822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.419 [2024-12-06 06:56:41.128869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.419 [2024-12-06 06:56:41.128877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:28.419 [2024-12-06 06:56:41.128884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:30:28.419 [2024-12-06 06:56:41.128890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.419 [2024-12-06 06:56:41.133323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.419 [2024-12-06 06:56:41.133347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:28.419 [2024-12-06 06:56:41.133356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.387 ms 00:30:28.419 [2024-12-06 06:56:41.133362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.419 [2024-12-06 06:56:41.133417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.419 [2024-12-06 06:56:41.133425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:28.419 [2024-12-06 06:56:41.133431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:30:28.419 [2024-12-06 06:56:41.133437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.419 [2024-12-06 06:56:41.133493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.419 [2024-12-06 06:56:41.133502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:28.419 [2024-12-06 06:56:41.133508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:28.419 [2024-12-06 06:56:41.133514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.419 [2024-12-06 06:56:41.133533] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:28.419 [2024-12-06 06:56:41.136130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.419 [2024-12-06 06:56:41.136153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:28.419 
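The 'Killed' notice above marks dirty_shutdown.sh@83 removing the first target (pid 79185) with SIGKILL, after which the @88 spdk_dd invocation rebuilds the same bdev stack on its own from the ftl.json captured earlier via save_subsystem_config; that the underlying store was never closed cleanly is presumably why 'Performing recovery on blobstore' runs before FTL comes back up. The reattach pattern, sketched from the commands already visible in the trace (spdk_pid stands in for 79185, and paths are abbreviated):

# Kill the target without any teardown, then drop its shm trace file,
# exactly as traced at dirty_shutdown.sh@83-@84.
kill -9 "$spdk_pid"
rm -f "/dev/shm/spdk_tgt_trace.pid$spdk_pid"
# spdk_dd instantiates the saved bdev config itself, so the write lands
# on ftl0 with no spdk_tgt running; --seek=262144 places this second
# 1 GiB after the data written in the first pass (as at @88).
spdk_dd --if=testfile2 --ob=ftl0 --count=262144 --seek=262144 \
    --json=ftl.json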
[2024-12-06 06:56:41.136160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.603 ms 00:30:28.419 [2024-12-06 06:56:41.136166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.419 [2024-12-06 06:56:41.136191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.420 [2024-12-06 06:56:41.136198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:28.420 [2024-12-06 06:56:41.136204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:28.420 [2024-12-06 06:56:41.136210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.420 [2024-12-06 06:56:41.136227] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:28.420 [2024-12-06 06:56:41.136242] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:28.420 [2024-12-06 06:56:41.136269] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:28.420 [2024-12-06 06:56:41.136282] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:28.420 [2024-12-06 06:56:41.136362] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:28.420 [2024-12-06 06:56:41.136370] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:28.420 [2024-12-06 06:56:41.136378] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:28.420 [2024-12-06 06:56:41.136387] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:28.420 [2024-12-06 06:56:41.136395] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:28.420 [2024-12-06 06:56:41.136401] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:28.420 [2024-12-06 06:56:41.136407] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:28.420 [2024-12-06 06:56:41.136413] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:28.420 [2024-12-06 06:56:41.136419] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:28.420 [2024-12-06 06:56:41.136424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.420 [2024-12-06 06:56:41.136430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:28.420 [2024-12-06 06:56:41.136436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:30:28.420 [2024-12-06 06:56:41.136442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.420 [2024-12-06 06:56:41.136520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.420 [2024-12-06 06:56:41.136530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:28.420 [2024-12-06 06:56:41.136536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:30:28.420 [2024-12-06 06:56:41.136542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.420 [2024-12-06 06:56:41.136622] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:28.420 [2024-12-06 06:56:41.136633] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:28.420 [2024-12-06 06:56:41.136640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:28.420 [2024-12-06 06:56:41.136646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:28.420 [2024-12-06 06:56:41.136652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:28.420 [2024-12-06 06:56:41.136657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:28.420 [2024-12-06 06:56:41.136663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:28.420 [2024-12-06 06:56:41.136668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:28.420 [2024-12-06 06:56:41.136674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:28.420 [2024-12-06 06:56:41.136683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:28.420 [2024-12-06 06:56:41.136688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:28.420 [2024-12-06 06:56:41.136694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:28.420 [2024-12-06 06:56:41.136699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:28.420 [2024-12-06 06:56:41.136704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:28.420 [2024-12-06 06:56:41.136710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:28.420 [2024-12-06 06:56:41.136715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:28.420 [2024-12-06 06:56:41.136721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:28.420 [2024-12-06 06:56:41.136727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:28.420 [2024-12-06 06:56:41.136732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:28.420 [2024-12-06 06:56:41.136737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:28.420 [2024-12-06 06:56:41.136742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:28.420 [2024-12-06 06:56:41.136747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:28.420 [2024-12-06 06:56:41.136753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:28.420 [2024-12-06 06:56:41.136758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:28.420 [2024-12-06 06:56:41.136762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:28.420 [2024-12-06 06:56:41.136768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:28.420 [2024-12-06 06:56:41.136773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:28.420 [2024-12-06 06:56:41.136778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:28.420 [2024-12-06 06:56:41.136783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:28.420 [2024-12-06 06:56:41.136788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:28.420 [2024-12-06 06:56:41.136793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:28.420 [2024-12-06 06:56:41.136798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:28.420 [2024-12-06 06:56:41.136803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:28.420 [2024-12-06 06:56:41.136808] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:28.420 [2024-12-06 06:56:41.136813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:28.420 [2024-12-06 06:56:41.136818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:28.420 [2024-12-06 06:56:41.136822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:28.420 [2024-12-06 06:56:41.136829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:28.420 [2024-12-06 06:56:41.136834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:28.420 [2024-12-06 06:56:41.136839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:28.420 [2024-12-06 06:56:41.136844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:28.420 [2024-12-06 06:56:41.136849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:28.420 [2024-12-06 06:56:41.136854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:28.420 [2024-12-06 06:56:41.136859] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:28.420 [2024-12-06 06:56:41.136865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:28.420 [2024-12-06 06:56:41.136872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:28.420 [2024-12-06 06:56:41.136877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:28.420 [2024-12-06 06:56:41.136883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:28.420 [2024-12-06 06:56:41.136889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:28.420 [2024-12-06 06:56:41.136894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:28.420 [2024-12-06 06:56:41.136899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:28.420 [2024-12-06 06:56:41.136905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:28.420 [2024-12-06 06:56:41.136910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:28.420 [2024-12-06 06:56:41.136916] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:28.420 [2024-12-06 06:56:41.136923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:28.420 [2024-12-06 06:56:41.136930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:28.420 [2024-12-06 06:56:41.136935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:28.420 [2024-12-06 06:56:41.136941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:28.420 [2024-12-06 06:56:41.136946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:28.420 [2024-12-06 06:56:41.136952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:28.420 [2024-12-06 06:56:41.136958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc 
ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:28.420 [2024-12-06 06:56:41.136963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:28.420 [2024-12-06 06:56:41.136968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:28.420 [2024-12-06 06:56:41.136974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:28.420 [2024-12-06 06:56:41.136979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:28.420 [2024-12-06 06:56:41.136984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:28.420 [2024-12-06 06:56:41.136990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:28.420 [2024-12-06 06:56:41.136996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:28.420 [2024-12-06 06:56:41.137001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:28.420 [2024-12-06 06:56:41.137007] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:28.420 [2024-12-06 06:56:41.137013] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:28.420 [2024-12-06 06:56:41.137019] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:28.420 [2024-12-06 06:56:41.137025] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:28.420 [2024-12-06 06:56:41.137030] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:28.421 [2024-12-06 06:56:41.137035] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:28.421 [2024-12-06 06:56:41.137041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.421 [2024-12-06 06:56:41.137047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:28.421 [2024-12-06 06:56:41.137052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.473 ms 00:30:28.421 [2024-12-06 06:56:41.137058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.682 [2024-12-06 06:56:41.157864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.682 [2024-12-06 06:56:41.157977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:28.682 [2024-12-06 06:56:41.157991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.772 ms 00:30:28.682 [2024-12-06 06:56:41.157998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.682 [2024-12-06 06:56:41.158068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.682 [2024-12-06 06:56:41.158074] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:28.682 [2024-12-06 06:56:41.158080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:30:28.682 [2024-12-06 06:56:41.158086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.682 [2024-12-06 06:56:41.202348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.682 [2024-12-06 06:56:41.202388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:28.682 [2024-12-06 06:56:41.202401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.218 ms 00:30:28.682 [2024-12-06 06:56:41.202407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.682 [2024-12-06 06:56:41.202455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.682 [2024-12-06 06:56:41.202477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:28.682 [2024-12-06 06:56:41.202484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:28.682 [2024-12-06 06:56:41.202490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.682 [2024-12-06 06:56:41.202822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.682 [2024-12-06 06:56:41.202848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:28.682 [2024-12-06 06:56:41.202856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:30:28.682 [2024-12-06 06:56:41.202867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.682 [2024-12-06 06:56:41.202969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.682 [2024-12-06 06:56:41.202980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:28.682 [2024-12-06 06:56:41.202987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:30:28.682 [2024-12-06 06:56:41.202993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.682 [2024-12-06 06:56:41.213645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.682 [2024-12-06 06:56:41.213668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:28.682 [2024-12-06 06:56:41.213677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.633 ms 00:30:28.682 [2024-12-06 06:56:41.213682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.682 [2024-12-06 06:56:41.223371] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:28.682 [2024-12-06 06:56:41.223411] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:28.682 [2024-12-06 06:56:41.223421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.682 [2024-12-06 06:56:41.223429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:28.682 [2024-12-06 06:56:41.223436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.643 ms 00:30:28.682 [2024-12-06 06:56:41.223442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.682 [2024-12-06 06:56:41.243205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.682 [2024-12-06 06:56:41.243248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:28.682 [2024-12-06 
06:56:41.243261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.321 ms 00:30:28.682 [2024-12-06 06:56:41.243268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.682 [2024-12-06 06:56:41.252346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.682 [2024-12-06 06:56:41.252374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:28.682 [2024-12-06 06:56:41.252382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.038 ms 00:30:28.682 [2024-12-06 06:56:41.252388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.682 [2024-12-06 06:56:41.261287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.682 [2024-12-06 06:56:41.261399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:28.682 [2024-12-06 06:56:41.261412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.869 ms 00:30:28.682 [2024-12-06 06:56:41.261418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.682 [2024-12-06 06:56:41.261902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.682 [2024-12-06 06:56:41.261913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:28.682 [2024-12-06 06:56:41.261920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:30:28.682 [2024-12-06 06:56:41.261926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.682 [2024-12-06 06:56:41.306848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.682 [2024-12-06 06:56:41.306893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:28.682 [2024-12-06 06:56:41.306904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.907 ms 00:30:28.682 [2024-12-06 06:56:41.306910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.682 [2024-12-06 06:56:41.315124] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:28.682 [2024-12-06 06:56:41.317453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.682 [2024-12-06 06:56:41.317483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:28.682 [2024-12-06 06:56:41.317494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.495 ms 00:30:28.682 [2024-12-06 06:56:41.317505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.682 [2024-12-06 06:56:41.317578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.682 [2024-12-06 06:56:41.317587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:28.682 [2024-12-06 06:56:41.317594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:28.682 [2024-12-06 06:56:41.317600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.682 [2024-12-06 06:56:41.317665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.682 [2024-12-06 06:56:41.317673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:28.682 [2024-12-06 06:56:41.317679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:30:28.682 [2024-12-06 06:56:41.317686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.682 [2024-12-06 06:56:41.317704] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.682 [2024-12-06 06:56:41.317710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:28.682 [2024-12-06 06:56:41.317716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:28.682 [2024-12-06 06:56:41.317722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.682 [2024-12-06 06:56:41.317747] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:28.682 [2024-12-06 06:56:41.317755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.682 [2024-12-06 06:56:41.317760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:28.682 [2024-12-06 06:56:41.317766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:28.682 [2024-12-06 06:56:41.317774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.682 [2024-12-06 06:56:41.335997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.682 [2024-12-06 06:56:41.336104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:28.682 [2024-12-06 06:56:41.336160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.207 ms 00:30:28.682 [2024-12-06 06:56:41.336179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.682 [2024-12-06 06:56:41.336522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:28.682 [2024-12-06 06:56:41.336631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:28.682 [2024-12-06 06:56:41.336656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:30:28.682 [2024-12-06 06:56:41.336672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.682 [2024-12-06 06:56:41.337551] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 220.128 ms, result 0 00:30:29.622  [2024-12-06T06:56:43.748Z] Copying: 43/1024 [MB] (43 MBps) [2024-12-06T06:56:44.691Z] Copying: 86/1024 [MB] (43 MBps) [2024-12-06T06:56:45.643Z] Copying: 126/1024 [MB] (39 MBps) [2024-12-06T06:56:46.584Z] Copying: 164/1024 [MB] (38 MBps) [2024-12-06T06:56:47.607Z] Copying: 208/1024 [MB] (44 MBps) [2024-12-06T06:56:48.550Z] Copying: 248/1024 [MB] (39 MBps) [2024-12-06T06:56:49.497Z] Copying: 276/1024 [MB] (28 MBps) [2024-12-06T06:56:50.441Z] Copying: 316/1024 [MB] (40 MBps) [2024-12-06T06:56:51.383Z] Copying: 355/1024 [MB] (39 MBps) [2024-12-06T06:56:52.785Z] Copying: 391/1024 [MB] (35 MBps) [2024-12-06T06:56:53.755Z] Copying: 427/1024 [MB] (36 MBps) [2024-12-06T06:56:54.685Z] Copying: 471/1024 [MB] (43 MBps) [2024-12-06T06:56:55.646Z] Copying: 514/1024 [MB] (43 MBps) [2024-12-06T06:56:56.591Z] Copying: 560/1024 [MB] (45 MBps) [2024-12-06T06:56:57.538Z] Copying: 602/1024 [MB] (42 MBps) [2024-12-06T06:56:58.482Z] Copying: 639/1024 [MB] (37 MBps) [2024-12-06T06:56:59.424Z] Copying: 681/1024 [MB] (41 MBps) [2024-12-06T06:57:00.366Z] Copying: 725/1024 [MB] (43 MBps) [2024-12-06T06:57:01.746Z] Copying: 772/1024 [MB] (46 MBps) [2024-12-06T06:57:02.675Z] Copying: 815/1024 [MB] (43 MBps) [2024-12-06T06:57:03.604Z] Copying: 853/1024 [MB] (38 MBps) [2024-12-06T06:57:04.533Z] Copying: 884/1024 [MB] (30 MBps) [2024-12-06T06:57:05.464Z] Copying: 926/1024 [MB] (41 MBps) [2024-12-06T06:57:06.408Z] Copying: 970/1024 [MB] (44 MBps) [2024-12-06T06:57:07.781Z] 
Copying: 1015/1024 [MB] (44 MBps) [2024-12-06T06:57:07.781Z] Copying: 1048332/1048576 [kB] (8608 kBps) [2024-12-06T06:57:07.781Z] Copying: 1024/1024 [MB] (average 38 MBps)[2024-12-06 06:57:07.626896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.040 [2024-12-06 06:57:07.626951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:55.040 [2024-12-06 06:57:07.626966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:55.040 [2024-12-06 06:57:07.626975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.040 [2024-12-06 06:57:07.627928] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:55.040 [2024-12-06 06:57:07.632536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.040 [2024-12-06 06:57:07.632569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:55.040 [2024-12-06 06:57:07.632580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.585 ms 00:30:55.040 [2024-12-06 06:57:07.632594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.040 [2024-12-06 06:57:07.644736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.040 [2024-12-06 06:57:07.644768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:55.040 [2024-12-06 06:57:07.644778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.970 ms 00:30:55.040 [2024-12-06 06:57:07.644786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.040 [2024-12-06 06:57:07.662388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.040 [2024-12-06 06:57:07.662419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:55.040 [2024-12-06 06:57:07.662430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.586 ms 00:30:55.040 [2024-12-06 06:57:07.662438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.040 [2024-12-06 06:57:07.668620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.040 [2024-12-06 06:57:07.668649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:55.040 [2024-12-06 06:57:07.668658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.141 ms 00:30:55.040 [2024-12-06 06:57:07.668666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.040 [2024-12-06 06:57:07.692308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.040 [2024-12-06 06:57:07.692350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:55.040 [2024-12-06 06:57:07.692361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.596 ms 00:30:55.040 [2024-12-06 06:57:07.692369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.040 [2024-12-06 06:57:07.706040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.040 [2024-12-06 06:57:07.706074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:55.040 [2024-12-06 06:57:07.706085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.639 ms 00:30:55.040 [2024-12-06 06:57:07.706095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.040 [2024-12-06 06:57:07.762798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:30:55.040 [2024-12-06 06:57:07.762936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:55.040 [2024-12-06 06:57:07.762958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.668 ms 00:30:55.040 [2024-12-06 06:57:07.762966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.299 [2024-12-06 06:57:07.786259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.299 [2024-12-06 06:57:07.786387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:55.299 [2024-12-06 06:57:07.786402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.275 ms 00:30:55.299 [2024-12-06 06:57:07.786419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.299 [2024-12-06 06:57:07.809190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.299 [2024-12-06 06:57:07.809305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:55.299 [2024-12-06 06:57:07.809320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.742 ms 00:30:55.299 [2024-12-06 06:57:07.809327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.299 [2024-12-06 06:57:07.831572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.299 [2024-12-06 06:57:07.831685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:55.299 [2024-12-06 06:57:07.831699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.218 ms 00:30:55.299 [2024-12-06 06:57:07.831706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.299 [2024-12-06 06:57:07.854160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.299 [2024-12-06 06:57:07.854191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:55.299 [2024-12-06 06:57:07.854200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.390 ms 00:30:55.299 [2024-12-06 06:57:07.854207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.299 [2024-12-06 06:57:07.854237] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:55.299 [2024-12-06 06:57:07.854251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 125952 / 261120 wr_cnt: 1 state: open 00:30:55.299 [2024-12-06 06:57:07.854260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:55.299 [2024-12-06 06:57:07.854269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:55.299 [2024-12-06 06:57:07.854276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:55.299 [2024-12-06 06:57:07.854284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:55.299 [2024-12-06 06:57:07.854291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:55.299 [2024-12-06 06:57:07.854298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:55.299 [2024-12-06 06:57:07.854306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:55.299 [2024-12-06 06:57:07.854313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 
261120 wr_cnt: 0 state: free 00:30:55.299 [2024-12-06 06:57:07.854321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:55.299 [2024-12-06 06:57:07.854328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:55.299 [2024-12-06 06:57:07.854336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:55.299 [2024-12-06 06:57:07.854343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:55.299 [2024-12-06 06:57:07.854350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:55.299 [2024-12-06 06:57:07.854358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:55.299 [2024-12-06 06:57:07.854365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:55.299 [2024-12-06 06:57:07.854373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:55.299 [2024-12-06 06:57:07.854380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:55.299 [2024-12-06 06:57:07.854387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:55.299 [2024-12-06 06:57:07.854394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:55.299 [2024-12-06 06:57:07.854402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:55.299 [2024-12-06 06:57:07.854409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:55.299 [2024-12-06 06:57:07.854416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:55.299 [2024-12-06 06:57:07.854424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:55.299 [2024-12-06 06:57:07.854431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:55.299 [2024-12-06 06:57:07.854438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:55.299 [2024-12-06 06:57:07.854447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:55.299 [2024-12-06 06:57:07.854454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:55.299 [2024-12-06 06:57:07.854481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:55.299 [2024-12-06 06:57:07.854490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:55.299 [2024-12-06 06:57:07.854498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:55.299 [2024-12-06 06:57:07.854506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:55.299 [2024-12-06 06:57:07.854528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:55.299 [2024-12-06 06:57:07.854536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854723] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 
06:57:07.854907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.854998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.855005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.855013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.855020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.855027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:55.300 [2024-12-06 06:57:07.855043] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:55.300 [2024-12-06 06:57:07.855050] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1418b45b-9351-478f-9d9d-979a1b5eff85 00:30:55.300 [2024-12-06 06:57:07.855067] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 125952 00:30:55.300 [2024-12-06 06:57:07.855074] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 126912 00:30:55.300 [2024-12-06 06:57:07.855081] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 125952 00:30:55.300 [2024-12-06 06:57:07.855089] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0076 00:30:55.300 [2024-12-06 06:57:07.855096] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:55.300 [2024-12-06 06:57:07.855104] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:55.300 [2024-12-06 06:57:07.855111] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:55.300 [2024-12-06 06:57:07.855118] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] low: 0 00:30:55.300 [2024-12-06 06:57:07.855124] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:55.300 [2024-12-06 06:57:07.855131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.300 [2024-12-06 06:57:07.855139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:55.300 [2024-12-06 06:57:07.855147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.894 ms 00:30:55.300 [2024-12-06 06:57:07.855154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.300 [2024-12-06 06:57:07.867196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.300 [2024-12-06 06:57:07.867225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:55.300 [2024-12-06 06:57:07.867235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.026 ms 00:30:55.300 [2024-12-06 06:57:07.867242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.300 [2024-12-06 06:57:07.867628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.300 [2024-12-06 06:57:07.867642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:55.300 [2024-12-06 06:57:07.867655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.358 ms 00:30:55.300 [2024-12-06 06:57:07.867662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.300 [2024-12-06 06:57:07.899964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:55.300 [2024-12-06 06:57:07.899997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:55.300 [2024-12-06 06:57:07.900006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:55.300 [2024-12-06 06:57:07.900013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.300 [2024-12-06 06:57:07.900070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:55.301 [2024-12-06 06:57:07.900079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:55.301 [2024-12-06 06:57:07.900089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:55.301 [2024-12-06 06:57:07.900096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.301 [2024-12-06 06:57:07.900142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:55.301 [2024-12-06 06:57:07.900151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:55.301 [2024-12-06 06:57:07.900159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:55.301 [2024-12-06 06:57:07.900166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.301 [2024-12-06 06:57:07.900180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:55.301 [2024-12-06 06:57:07.900188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:55.301 [2024-12-06 06:57:07.900196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:55.301 [2024-12-06 06:57:07.900202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.301 [2024-12-06 06:57:07.977298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:55.301 [2024-12-06 06:57:07.977453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:55.301 
[2024-12-06 06:57:07.977485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:55.301 [2024-12-06 06:57:07.977494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.563 [2024-12-06 06:57:08.040718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:55.563 [2024-12-06 06:57:08.040859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:55.564 [2024-12-06 06:57:08.040874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:55.564 [2024-12-06 06:57:08.040886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.564 [2024-12-06 06:57:08.040949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:55.564 [2024-12-06 06:57:08.040959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:55.564 [2024-12-06 06:57:08.040966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:55.564 [2024-12-06 06:57:08.040974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.564 [2024-12-06 06:57:08.041006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:55.564 [2024-12-06 06:57:08.041015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:55.564 [2024-12-06 06:57:08.041022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:55.564 [2024-12-06 06:57:08.041030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.564 [2024-12-06 06:57:08.041116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:55.564 [2024-12-06 06:57:08.041126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:55.564 [2024-12-06 06:57:08.041133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:55.564 [2024-12-06 06:57:08.041141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.564 [2024-12-06 06:57:08.041172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:55.564 [2024-12-06 06:57:08.041181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:55.564 [2024-12-06 06:57:08.041188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:55.564 [2024-12-06 06:57:08.041196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.564 [2024-12-06 06:57:08.041231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:55.564 [2024-12-06 06:57:08.041240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:55.564 [2024-12-06 06:57:08.041247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:55.564 [2024-12-06 06:57:08.041254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.564 [2024-12-06 06:57:08.041293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:55.564 [2024-12-06 06:57:08.041303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:55.564 [2024-12-06 06:57:08.041311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:55.564 [2024-12-06 06:57:08.041318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.564 [2024-12-06 06:57:08.041425] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 417.303 ms, result 0 
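The 'FTL shutdown' management process above closes the cycle opened by 'FTL startup' (220.128 ms) earlier in this run, and the 'Dump statistics' step shows where the WAF figure comes from: it is simply total writes divided by user writes, 126912 / 125952 ≈ 1.0076. A minimal post-processing sketch, assuming the trace_step/ftl_debug *NOTICE* formats exactly as they appear in the stream above ('console.log' is a placeholder path, not something the test itself writes):

#!/usr/bin/env python3
# Sketch: list per-step FTL durations and recompute WAF from a captured
# console log. The regexes assume the *NOTICE* formats shown above.
import re
import sys

path = sys.argv[1] if len(sys.argv) > 1 else "console.log"  # placeholder
with open(path) as f:
    text = f.read()

# trace_step logs "name:" and "duration:" as a matched pair per step, so
# zipping the two findall() results keeps them aligned without a parser.
names = re.findall(
    r"trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] name: ([^\n]*?)\s+\d{2}:\d{2}:\d{2}", text)
durations = re.findall(
    r"trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] duration: ([0-9.]+) ms", text)
for name, dur in zip(names, durations):
    print(f"{float(dur):10.3f} ms  {name}")

# WAF as dumped by ftl_debug.c is total writes / user writes:
# 126912 / 125952 = 1.00762..., matching the logged "WAF: 1.0076".
total = int(re.search(r"total writes: (\d+)", text).group(1))
user = int(re.search(r"user writes: (\d+)", text).group(1))
print(f"WAF = {total}/{user} = {total / user:.4f}")

The `\s+` before the timestamp lets the name regex work whether the capture keeps one entry per line or runs entries together as above.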
00:30:58.114 00:30:58.114 00:30:58.114 06:57:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:31:00.027 06:57:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:00.028 [2024-12-06 06:57:12.742319] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:31:00.028 [2024-12-06 06:57:12.742456] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80218 ] 00:31:00.290 [2024-12-06 06:57:12.904654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:00.290 [2024-12-06 06:57:12.988710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:00.552 [2024-12-06 06:57:13.201187] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:00.552 [2024-12-06 06:57:13.201367] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:00.815 [2024-12-06 06:57:13.358510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.815 [2024-12-06 06:57:13.358680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:00.815 [2024-12-06 06:57:13.358747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:00.815 [2024-12-06 06:57:13.358772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.815 [2024-12-06 06:57:13.358842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.815 [2024-12-06 06:57:13.358871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:00.815 [2024-12-06 06:57:13.358891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:31:00.815 [2024-12-06 06:57:13.358910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.815 [2024-12-06 06:57:13.358942] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:00.815 [2024-12-06 06:57:13.359820] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:00.815 [2024-12-06 06:57:13.359939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.815 [2024-12-06 06:57:13.359962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:00.815 [2024-12-06 06:57:13.360034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.002 ms 00:31:00.815 [2024-12-06 06:57:13.360083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.815 [2024-12-06 06:57:13.361696] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:00.815 [2024-12-06 06:57:13.375075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.815 [2024-12-06 06:57:13.375112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:00.815 [2024-12-06 06:57:13.375124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.382 ms 00:31:00.815 [2024-12-06 06:57:13.375133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.815 
[2024-12-06 06:57:13.375195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.815 [2024-12-06 06:57:13.375205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:00.815 [2024-12-06 06:57:13.375213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:31:00.815 [2024-12-06 06:57:13.375221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.815 [2024-12-06 06:57:13.380848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.815 [2024-12-06 06:57:13.380882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:00.815 [2024-12-06 06:57:13.380892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.564 ms 00:31:00.815 [2024-12-06 06:57:13.380903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.815 [2024-12-06 06:57:13.380979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.815 [2024-12-06 06:57:13.380988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:00.815 [2024-12-06 06:57:13.380997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:31:00.815 [2024-12-06 06:57:13.381004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.815 [2024-12-06 06:57:13.381042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.815 [2024-12-06 06:57:13.381051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:00.815 [2024-12-06 06:57:13.381060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:00.815 [2024-12-06 06:57:13.381068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.815 [2024-12-06 06:57:13.381093] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:00.815 [2024-12-06 06:57:13.384612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.815 [2024-12-06 06:57:13.384642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:00.815 [2024-12-06 06:57:13.384655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.526 ms 00:31:00.815 [2024-12-06 06:57:13.384662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.815 [2024-12-06 06:57:13.384692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.815 [2024-12-06 06:57:13.384700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:00.815 [2024-12-06 06:57:13.384708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:31:00.815 [2024-12-06 06:57:13.384715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.815 [2024-12-06 06:57:13.384735] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:00.815 [2024-12-06 06:57:13.384754] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:00.815 [2024-12-06 06:57:13.384789] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:00.815 [2024-12-06 06:57:13.384806] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:00.815 [2024-12-06 06:57:13.384909] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc 
layout blob store 0x150 bytes 00:31:00.815 [2024-12-06 06:57:13.384919] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:00.815 [2024-12-06 06:57:13.384930] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:00.815 [2024-12-06 06:57:13.384940] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:00.815 [2024-12-06 06:57:13.384948] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:00.815 [2024-12-06 06:57:13.384956] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:00.815 [2024-12-06 06:57:13.384963] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:00.815 [2024-12-06 06:57:13.384973] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:00.815 [2024-12-06 06:57:13.384980] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:00.815 [2024-12-06 06:57:13.384988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.815 [2024-12-06 06:57:13.384995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:00.815 [2024-12-06 06:57:13.385003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:31:00.815 [2024-12-06 06:57:13.385010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.815 [2024-12-06 06:57:13.385093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.815 [2024-12-06 06:57:13.385101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:00.816 [2024-12-06 06:57:13.385108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:31:00.816 [2024-12-06 06:57:13.385115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.816 [2024-12-06 06:57:13.385233] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:00.816 [2024-12-06 06:57:13.385244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:00.816 [2024-12-06 06:57:13.385252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:00.816 [2024-12-06 06:57:13.385260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:00.816 [2024-12-06 06:57:13.385267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:00.816 [2024-12-06 06:57:13.385274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:00.816 [2024-12-06 06:57:13.385281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:00.816 [2024-12-06 06:57:13.385289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:00.816 [2024-12-06 06:57:13.385296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:00.816 [2024-12-06 06:57:13.385303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:00.816 [2024-12-06 06:57:13.385310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:00.816 [2024-12-06 06:57:13.385316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:00.816 [2024-12-06 06:57:13.385323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:00.816 [2024-12-06 06:57:13.385335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md 00:31:00.816 [2024-12-06 06:57:13.385342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:00.816 [2024-12-06 06:57:13.385350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:00.816 [2024-12-06 06:57:13.385357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:00.816 [2024-12-06 06:57:13.385364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:00.816 [2024-12-06 06:57:13.385370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:00.816 [2024-12-06 06:57:13.385377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:00.816 [2024-12-06 06:57:13.385384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:00.816 [2024-12-06 06:57:13.385391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:00.816 [2024-12-06 06:57:13.385397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:00.816 [2024-12-06 06:57:13.385403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:00.816 [2024-12-06 06:57:13.385410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:00.816 [2024-12-06 06:57:13.385416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:00.816 [2024-12-06 06:57:13.385423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:00.816 [2024-12-06 06:57:13.385429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:00.816 [2024-12-06 06:57:13.385436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:00.816 [2024-12-06 06:57:13.385442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:00.816 [2024-12-06 06:57:13.385449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:00.816 [2024-12-06 06:57:13.385455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:00.816 [2024-12-06 06:57:13.385614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:00.816 [2024-12-06 06:57:13.385645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:00.816 [2024-12-06 06:57:13.385665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:00.816 [2024-12-06 06:57:13.385684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:00.816 [2024-12-06 06:57:13.385702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:00.816 [2024-12-06 06:57:13.385720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:00.816 [2024-12-06 06:57:13.385738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:00.816 [2024-12-06 06:57:13.385756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:00.816 [2024-12-06 06:57:13.385774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:00.816 [2024-12-06 06:57:13.385792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:00.816 [2024-12-06 06:57:13.385809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:00.816 [2024-12-06 06:57:13.385827] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:00.816 [2024-12-06 06:57:13.385907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:00.816 [2024-12-06 06:57:13.385926] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:00.816 [2024-12-06 06:57:13.385944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:00.816 [2024-12-06 06:57:13.385968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:00.816 [2024-12-06 06:57:13.386081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:00.816 [2024-12-06 06:57:13.386113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:00.816 [2024-12-06 06:57:13.386132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:00.816 [2024-12-06 06:57:13.386152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:00.816 [2024-12-06 06:57:13.386171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:00.816 [2024-12-06 06:57:13.386248] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:00.816 [2024-12-06 06:57:13.386283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:00.816 [2024-12-06 06:57:13.386319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:00.816 [2024-12-06 06:57:13.386383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:00.816 [2024-12-06 06:57:13.386413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:00.816 [2024-12-06 06:57:13.386441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:00.816 [2024-12-06 06:57:13.386506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:00.816 [2024-12-06 06:57:13.386537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:00.816 [2024-12-06 06:57:13.386566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:00.816 [2024-12-06 06:57:13.386615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:00.816 [2024-12-06 06:57:13.386646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:00.816 [2024-12-06 06:57:13.386675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:00.816 [2024-12-06 06:57:13.386703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:00.816 [2024-12-06 06:57:13.386758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:00.816 [2024-12-06 06:57:13.386768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:00.816 [2024-12-06 06:57:13.386777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:00.816 [2024-12-06 06:57:13.386784] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:00.816 [2024-12-06 06:57:13.386793] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:00.816 [2024-12-06 06:57:13.386801] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:00.817 [2024-12-06 06:57:13.386808] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:00.817 [2024-12-06 06:57:13.386815] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:00.817 [2024-12-06 06:57:13.386822] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:00.817 [2024-12-06 06:57:13.386831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.817 [2024-12-06 06:57:13.386839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:00.817 [2024-12-06 06:57:13.386847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.666 ms 00:31:00.817 [2024-12-06 06:57:13.386855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.817 [2024-12-06 06:57:13.414833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.817 [2024-12-06 06:57:13.414877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:00.817 [2024-12-06 06:57:13.414889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.906 ms 00:31:00.817 [2024-12-06 06:57:13.414901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.817 [2024-12-06 06:57:13.414986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.817 [2024-12-06 06:57:13.414995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:00.817 [2024-12-06 06:57:13.415004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:31:00.817 [2024-12-06 06:57:13.415012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.817 [2024-12-06 06:57:13.464083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.817 [2024-12-06 06:57:13.464276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:00.817 [2024-12-06 06:57:13.464298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.015 ms 00:31:00.817 [2024-12-06 06:57:13.464308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.817 [2024-12-06 06:57:13.464358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.817 [2024-12-06 06:57:13.464368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:00.817 [2024-12-06 06:57:13.464383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:00.817 [2024-12-06 06:57:13.464391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.817 [2024-12-06 06:57:13.465000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.817 [2024-12-06 06:57:13.465034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Initialize trim map 00:31:00.817 [2024-12-06 06:57:13.465044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.506 ms 00:31:00.817 [2024-12-06 06:57:13.465052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.817 [2024-12-06 06:57:13.465202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.817 [2024-12-06 06:57:13.465212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:00.817 [2024-12-06 06:57:13.465227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:31:00.817 [2024-12-06 06:57:13.465235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.817 [2024-12-06 06:57:13.480969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.817 [2024-12-06 06:57:13.481015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:00.817 [2024-12-06 06:57:13.481027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.715 ms 00:31:00.817 [2024-12-06 06:57:13.481035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.817 [2024-12-06 06:57:13.495173] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:31:00.817 [2024-12-06 06:57:13.495353] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:00.817 [2024-12-06 06:57:13.495372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.817 [2024-12-06 06:57:13.495381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:00.817 [2024-12-06 06:57:13.495402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.226 ms 00:31:00.817 [2024-12-06 06:57:13.495410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.817 [2024-12-06 06:57:13.521064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.817 [2024-12-06 06:57:13.521126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:00.817 [2024-12-06 06:57:13.521138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.462 ms 00:31:00.817 [2024-12-06 06:57:13.521147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.817 [2024-12-06 06:57:13.533940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.817 [2024-12-06 06:57:13.533985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:00.817 [2024-12-06 06:57:13.533996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.740 ms 00:31:00.817 [2024-12-06 06:57:13.534004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.817 [2024-12-06 06:57:13.546891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.817 [2024-12-06 06:57:13.546936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:00.817 [2024-12-06 06:57:13.546948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.842 ms 00:31:00.817 [2024-12-06 06:57:13.546955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.817 [2024-12-06 06:57:13.547640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.817 [2024-12-06 06:57:13.547666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:00.817 [2024-12-06 
06:57:13.547680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.576 ms 00:31:00.817 [2024-12-06 06:57:13.547688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.079 [2024-12-06 06:57:13.614623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.079 [2024-12-06 06:57:13.614685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:01.079 [2024-12-06 06:57:13.614707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.915 ms 00:31:01.079 [2024-12-06 06:57:13.614717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.079 [2024-12-06 06:57:13.625780] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:01.079 [2024-12-06 06:57:13.628885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.079 [2024-12-06 06:57:13.629064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:01.079 [2024-12-06 06:57:13.629084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.110 ms 00:31:01.079 [2024-12-06 06:57:13.629094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.079 [2024-12-06 06:57:13.629181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.079 [2024-12-06 06:57:13.629193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:01.079 [2024-12-06 06:57:13.629205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:31:01.079 [2024-12-06 06:57:13.629213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.079 [2024-12-06 06:57:13.631089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.079 [2024-12-06 06:57:13.631134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:01.079 [2024-12-06 06:57:13.631144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.836 ms 00:31:01.079 [2024-12-06 06:57:13.631153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.079 [2024-12-06 06:57:13.631182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.079 [2024-12-06 06:57:13.631191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:01.079 [2024-12-06 06:57:13.631200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:01.079 [2024-12-06 06:57:13.631208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.079 [2024-12-06 06:57:13.631253] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:01.079 [2024-12-06 06:57:13.631265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.079 [2024-12-06 06:57:13.631273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:01.079 [2024-12-06 06:57:13.631283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:31:01.079 [2024-12-06 06:57:13.631291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.079 [2024-12-06 06:57:13.656424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.079 [2024-12-06 06:57:13.656621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:01.079 [2024-12-06 06:57:13.656650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.115 ms 00:31:01.079 [2024-12-06 06:57:13.656660] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.079 [2024-12-06 06:57:13.656734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.079 [2024-12-06 06:57:13.656745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:01.079 [2024-12-06 06:57:13.656754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:31:01.079 [2024-12-06 06:57:13.656762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.079 [2024-12-06 06:57:13.658002] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 299.002 ms, result 0 00:31:02.466  [2024-12-06T06:57:16.150Z] Copying: 1320/1048576 [kB] (1320 kBps) [2024-12-06T06:57:17.087Z] Copying: 5212/1048576 [kB] (3892 kBps) [2024-12-06T06:57:18.019Z] Copying: 34/1024 [MB] (29 MBps) [2024-12-06T06:57:18.953Z] Copying: 87/1024 [MB] (52 MBps) [2024-12-06T06:57:19.888Z] Copying: 141/1024 [MB] (53 MBps) [2024-12-06T06:57:21.258Z] Copying: 195/1024 [MB] (54 MBps) [2024-12-06T06:57:22.190Z] Copying: 248/1024 [MB] (52 MBps) [2024-12-06T06:57:23.119Z] Copying: 301/1024 [MB] (52 MBps) [2024-12-06T06:57:24.050Z] Copying: 354/1024 [MB] (52 MBps) [2024-12-06T06:57:24.981Z] Copying: 406/1024 [MB] (52 MBps) [2024-12-06T06:57:25.909Z] Copying: 457/1024 [MB] (50 MBps) [2024-12-06T06:57:26.840Z] Copying: 509/1024 [MB] (52 MBps) [2024-12-06T06:57:28.216Z] Copying: 563/1024 [MB] (53 MBps) [2024-12-06T06:57:29.149Z] Copying: 617/1024 [MB] (54 MBps) [2024-12-06T06:57:30.084Z] Copying: 667/1024 [MB] (50 MBps) [2024-12-06T06:57:31.018Z] Copying: 718/1024 [MB] (50 MBps) [2024-12-06T06:57:31.952Z] Copying: 770/1024 [MB] (52 MBps) [2024-12-06T06:57:32.889Z] Copying: 822/1024 [MB] (52 MBps) [2024-12-06T06:57:34.266Z] Copying: 875/1024 [MB] (52 MBps) [2024-12-06T06:57:35.201Z] Copying: 929/1024 [MB] (54 MBps) [2024-12-06T06:57:35.765Z] Copying: 983/1024 [MB] (53 MBps) [2024-12-06T06:57:36.332Z] Copying: 1024/1024 [MB] (average 46 MBps)[2024-12-06 06:57:36.049804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.591 [2024-12-06 06:57:36.050047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:23.591 [2024-12-06 06:57:36.050262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:23.591 [2024-12-06 06:57:36.050300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.591 [2024-12-06 06:57:36.050372] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:23.591 [2024-12-06 06:57:36.057995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.591 [2024-12-06 06:57:36.058066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:23.591 [2024-12-06 06:57:36.058093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.580 ms 00:31:23.591 [2024-12-06 06:57:36.058115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.591 [2024-12-06 06:57:36.058768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.591 [2024-12-06 06:57:36.058818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:23.591 [2024-12-06 06:57:36.058855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.602 ms 00:31:23.591 [2024-12-06 06:57:36.058876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.591 
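The "Copying:" lines above are spdk_dd's per-second progress output while the test file is written through ftl0: the figure in parentheses is the instantaneous transfer rate, and the closing line reports the overall average (46 MBps here). As a rough cross-check, a minimal sketch — assuming this console output has been saved to a file named build.log, a hypothetical name — that extracts the MB-range samples and averages them:

  # pull the per-interval MBps samples out of the progress lines and take their mean
  grep -o 'Copying: [0-9]*/1024 \[MB\] ([0-9]* MBps)' build.log |
    awk -F'[()]' '{ sub(/ MBps/, "", $2); sum += $2; n++ }
                  END { if (n) printf "samples=%d, mean=%.1f MBps\n", n, sum / n }'

The mean of the MB-range samples sits above the reported 46 MBps average because the overall figure also folds in the slow kB-range samples from the start of the copy.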
[2024-12-06 06:57:36.070880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.591 [2024-12-06 06:57:36.070910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:23.591 [2024-12-06 06:57:36.070921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.966 ms 00:31:23.591 [2024-12-06 06:57:36.070928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.591 [2024-12-06 06:57:36.077071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.591 [2024-12-06 06:57:36.077183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:23.591 [2024-12-06 06:57:36.077197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.120 ms 00:31:23.591 [2024-12-06 06:57:36.077210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.591 [2024-12-06 06:57:36.100080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.591 [2024-12-06 06:57:36.100192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:23.591 [2024-12-06 06:57:36.100207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.817 ms 00:31:23.591 [2024-12-06 06:57:36.100214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.591 [2024-12-06 06:57:36.114344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.591 [2024-12-06 06:57:36.114376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:23.591 [2024-12-06 06:57:36.114387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.102 ms 00:31:23.591 [2024-12-06 06:57:36.114396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.591 [2024-12-06 06:57:36.116569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.591 [2024-12-06 06:57:36.116599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:23.591 [2024-12-06 06:57:36.116609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.152 ms 00:31:23.591 [2024-12-06 06:57:36.116616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.591 [2024-12-06 06:57:36.139411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.591 [2024-12-06 06:57:36.139546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:23.591 [2024-12-06 06:57:36.139561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.777 ms 00:31:23.591 [2024-12-06 06:57:36.139568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.591 [2024-12-06 06:57:36.161842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.591 [2024-12-06 06:57:36.161872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:23.591 [2024-12-06 06:57:36.161882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.246 ms 00:31:23.591 [2024-12-06 06:57:36.161889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.591 [2024-12-06 06:57:36.183643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.591 [2024-12-06 06:57:36.183671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:23.591 [2024-12-06 06:57:36.183680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.724 ms 00:31:23.591 [2024-12-06 06:57:36.183688] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.591 [2024-12-06 06:57:36.205246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.591 [2024-12-06 06:57:36.205274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:23.591 [2024-12-06 06:57:36.205283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.508 ms 00:31:23.591 [2024-12-06 06:57:36.205290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.591 [2024-12-06 06:57:36.205319] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:23.591 [2024-12-06 06:57:36.205333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:23.591 [2024-12-06 06:57:36.205343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:31:23.591 [2024-12-06 06:57:36.205351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:23.591 [2024-12-06 06:57:36.205359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:23.591 [2024-12-06 06:57:36.205366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:23.591 [2024-12-06 06:57:36.205374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205501] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 
06:57:36.205695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 
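The band dump running above and below is ftl_debug.c's validity report taken at shutdown: each entry shows a band's valid blocks out of its 261120-block capacity, its write count, and its state — Band 1 is fully written and closed, Band 2 is open with 1536 valid blocks, and the remaining bands are free. A small pipeline, again over the assumed build.log, condenses the dump into per-state counts:

  # tally bands by state and sum the valid blocks across the dump
  grep -o 'Band [0-9]*: [0-9]* / 261120 wr_cnt: [0-9]* state: [a-z]*' build.log |
    awk '{ states[$NF]++; valid += $3 }
         END { for (s in states) printf "%-7s %d bands\n", s, states[s]
               printf "valid blocks total: %d\n", valid }'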
00:31:23.592 [2024-12-06 06:57:36.205878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.205993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.206000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.206007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.206015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.206022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.206029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.206036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.206044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:23.592 [2024-12-06 06:57:36.206051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 
wr_cnt: 0 state: free 00:31:23.593 [2024-12-06 06:57:36.206058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:23.593 [2024-12-06 06:57:36.206065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:23.593 [2024-12-06 06:57:36.206073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:23.593 [2024-12-06 06:57:36.206080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:23.593 [2024-12-06 06:57:36.206088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:23.593 [2024-12-06 06:57:36.206103] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:23.593 [2024-12-06 06:57:36.206110] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1418b45b-9351-478f-9d9d-979a1b5eff85 00:31:23.593 [2024-12-06 06:57:36.206118] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:31:23.593 [2024-12-06 06:57:36.206126] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 138688 00:31:23.593 [2024-12-06 06:57:36.206133] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 136704 00:31:23.593 [2024-12-06 06:57:36.206144] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0145 00:31:23.593 [2024-12-06 06:57:36.206151] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:23.593 [2024-12-06 06:57:36.206164] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:23.593 [2024-12-06 06:57:36.206171] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:23.593 [2024-12-06 06:57:36.206178] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:23.593 [2024-12-06 06:57:36.206184] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:23.593 [2024-12-06 06:57:36.206191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.593 [2024-12-06 06:57:36.206198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:23.593 [2024-12-06 06:57:36.206206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.872 ms 00:31:23.593 [2024-12-06 06:57:36.206213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.593 [2024-12-06 06:57:36.218256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.593 [2024-12-06 06:57:36.218285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:23.593 [2024-12-06 06:57:36.218295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.028 ms 00:31:23.593 [2024-12-06 06:57:36.218302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.593 [2024-12-06 06:57:36.218646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.593 [2024-12-06 06:57:36.218689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:23.593 [2024-12-06 06:57:36.218700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:31:23.593 [2024-12-06 06:57:36.218708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.593 [2024-12-06 06:57:36.251006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:23.593 [2024-12-06 06:57:36.251037] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:23.593 [2024-12-06 06:57:36.251048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:23.593 [2024-12-06 06:57:36.251055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.593 [2024-12-06 06:57:36.251105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:23.593 [2024-12-06 06:57:36.251113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:23.593 [2024-12-06 06:57:36.251121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:23.593 [2024-12-06 06:57:36.251128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.593 [2024-12-06 06:57:36.251181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:23.593 [2024-12-06 06:57:36.251193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:23.593 [2024-12-06 06:57:36.251201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:23.593 [2024-12-06 06:57:36.251208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.593 [2024-12-06 06:57:36.251222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:23.593 [2024-12-06 06:57:36.251230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:23.593 [2024-12-06 06:57:36.251237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:23.593 [2024-12-06 06:57:36.251244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.593 [2024-12-06 06:57:36.326817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:23.593 [2024-12-06 06:57:36.326860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:23.593 [2024-12-06 06:57:36.326871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:23.593 [2024-12-06 06:57:36.326878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.851 [2024-12-06 06:57:36.389496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:23.851 [2024-12-06 06:57:36.389653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:23.851 [2024-12-06 06:57:36.389668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:23.851 [2024-12-06 06:57:36.389676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.851 [2024-12-06 06:57:36.389740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:23.851 [2024-12-06 06:57:36.389748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:23.851 [2024-12-06 06:57:36.389760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:23.851 [2024-12-06 06:57:36.389767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.851 [2024-12-06 06:57:36.389801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:23.851 [2024-12-06 06:57:36.389809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:23.851 [2024-12-06 06:57:36.389817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:23.851 [2024-12-06 06:57:36.389825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.851 [2024-12-06 06:57:36.389909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
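The statistics block just above makes the write-amplification arithmetic easy to verify: WAF is total media writes divided by user (host) writes, and the two counters dumped by ftl_dev_dump_stats reproduce the logged figure exactly:

  # 138688 total writes / 136704 user writes, per the dump above
  awk 'BEGIN { printf "WAF = %.4f\n", 138688 / 136704 }'   # prints WAF = 1.0145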
00:31:23.851 [2024-12-06 06:57:36.389918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:23.851 [2024-12-06 06:57:36.389926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:23.851 [2024-12-06 06:57:36.389936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.851 [2024-12-06 06:57:36.389963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:23.851 [2024-12-06 06:57:36.389971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:23.851 [2024-12-06 06:57:36.389979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:23.851 [2024-12-06 06:57:36.389987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.851 [2024-12-06 06:57:36.390017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:23.851 [2024-12-06 06:57:36.390025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:23.851 [2024-12-06 06:57:36.390033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:23.851 [2024-12-06 06:57:36.390042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.851 [2024-12-06 06:57:36.390078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:23.851 [2024-12-06 06:57:36.390087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:23.851 [2024-12-06 06:57:36.390095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:23.851 [2024-12-06 06:57:36.390102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.851 [2024-12-06 06:57:36.390210] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 340.380 ms, result 0 00:31:25.753 00:31:25.753 00:31:25.753 06:57:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:27.717 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:31:27.717 06:57:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:27.717 [2024-12-06 06:57:40.051677] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
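With the first half of the test data already verified by md5sum above, dirty_shutdown.sh now reads the second half back out of the freshly reloaded FTL device. spdk_dd mirrors dd's option semantics, so --count and --skip select 262144 input blocks starting at input block 262144 of ftl0, and --json points at the saved configuration used to re-create the bdev stack. Stripped of the repository paths, the shape of the invocation is:

  # read blocks [262144, 524288) of ftl0 into a plain file for checksumming
  spdk_dd --ib=ftl0 --of=testfile2 \
          --count=262144 --skip=262144 \
          --json=config/ftl.json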
00:31:27.717 [2024-12-06 06:57:40.051773] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80501 ] 00:31:27.717 [2024-12-06 06:57:40.208987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:27.717 [2024-12-06 06:57:40.305591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:27.975 [2024-12-06 06:57:40.561867] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:27.975 [2024-12-06 06:57:40.561931] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:28.235 [2024-12-06 06:57:40.715626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.235 [2024-12-06 06:57:40.715674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:28.235 [2024-12-06 06:57:40.715687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:28.235 [2024-12-06 06:57:40.715695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.235 [2024-12-06 06:57:40.715736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.235 [2024-12-06 06:57:40.715747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:28.235 [2024-12-06 06:57:40.715756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:31:28.235 [2024-12-06 06:57:40.715763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.235 [2024-12-06 06:57:40.715779] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:28.235 [2024-12-06 06:57:40.716425] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:28.235 [2024-12-06 06:57:40.716441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.235 [2024-12-06 06:57:40.716449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:28.235 [2024-12-06 06:57:40.716457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.666 ms 00:31:28.235 [2024-12-06 06:57:40.716483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.235 [2024-12-06 06:57:40.717600] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:28.235 [2024-12-06 06:57:40.729903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.235 [2024-12-06 06:57:40.729945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:28.235 [2024-12-06 06:57:40.729957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.305 ms 00:31:28.235 [2024-12-06 06:57:40.729965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.235 [2024-12-06 06:57:40.730016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.235 [2024-12-06 06:57:40.730026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:28.235 [2024-12-06 06:57:40.730033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:31:28.235 [2024-12-06 06:57:40.730040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.235 [2024-12-06 06:57:40.734836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
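Every FTL management step in this log is traced as a quadruple — Action (or Rollback on the shutdown path), name, duration, status — like the entry opening above and completing below. A quick pipeline over the same assumed build.log pairs each step name with the duration that follows it, yielding a step-by-step timing profile of the startup and shutdown sequences:

  # pair each "name:" entry with its "duration:" entry on one line
  grep -o 'name: [A-Za-z0-9 ]*[A-Za-z]\|duration: [0-9.]* ms' build.log | paste - -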
00:31:28.235 [2024-12-06 06:57:40.734865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:28.235 [2024-12-06 06:57:40.734874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.738 ms 00:31:28.235 [2024-12-06 06:57:40.734885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.235 [2024-12-06 06:57:40.734950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.235 [2024-12-06 06:57:40.734959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:28.235 [2024-12-06 06:57:40.734966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:31:28.235 [2024-12-06 06:57:40.734973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.235 [2024-12-06 06:57:40.735013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.235 [2024-12-06 06:57:40.735023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:28.235 [2024-12-06 06:57:40.735031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:28.235 [2024-12-06 06:57:40.735038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.235 [2024-12-06 06:57:40.735061] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:28.235 [2024-12-06 06:57:40.738398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.235 [2024-12-06 06:57:40.738424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:28.235 [2024-12-06 06:57:40.738436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.341 ms 00:31:28.235 [2024-12-06 06:57:40.738443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.235 [2024-12-06 06:57:40.738487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.235 [2024-12-06 06:57:40.738496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:28.235 [2024-12-06 06:57:40.738504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:28.235 [2024-12-06 06:57:40.738511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.235 [2024-12-06 06:57:40.738529] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:28.235 [2024-12-06 06:57:40.738547] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:28.235 [2024-12-06 06:57:40.738581] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:28.235 [2024-12-06 06:57:40.738597] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:28.235 [2024-12-06 06:57:40.738697] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:28.235 [2024-12-06 06:57:40.738707] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:28.235 [2024-12-06 06:57:40.738717] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:28.235 [2024-12-06 06:57:40.738727] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:28.235 [2024-12-06 06:57:40.738735] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:28.235 [2024-12-06 06:57:40.738743] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:28.235 [2024-12-06 06:57:40.738750] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:28.236 [2024-12-06 06:57:40.738759] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:28.236 [2024-12-06 06:57:40.738766] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:28.236 [2024-12-06 06:57:40.738773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.236 [2024-12-06 06:57:40.738781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:28.236 [2024-12-06 06:57:40.738788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:31:28.236 [2024-12-06 06:57:40.738795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.236 [2024-12-06 06:57:40.738877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.236 [2024-12-06 06:57:40.738885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:28.236 [2024-12-06 06:57:40.738892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:31:28.236 [2024-12-06 06:57:40.738899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.236 [2024-12-06 06:57:40.739012] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:28.236 [2024-12-06 06:57:40.739023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:28.236 [2024-12-06 06:57:40.739030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:28.236 [2024-12-06 06:57:40.739038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:28.236 [2024-12-06 06:57:40.739045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:28.236 [2024-12-06 06:57:40.739052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:28.236 [2024-12-06 06:57:40.739058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:28.236 [2024-12-06 06:57:40.739066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:28.236 [2024-12-06 06:57:40.739073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:28.236 [2024-12-06 06:57:40.739079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:28.236 [2024-12-06 06:57:40.739086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:28.236 [2024-12-06 06:57:40.739094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:28.236 [2024-12-06 06:57:40.739100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:28.236 [2024-12-06 06:57:40.739112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:28.236 [2024-12-06 06:57:40.739118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:28.236 [2024-12-06 06:57:40.739125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:28.236 [2024-12-06 06:57:40.739131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:28.236 [2024-12-06 06:57:40.739139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:28.236 [2024-12-06 06:57:40.739145] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:28.236 [2024-12-06 06:57:40.739152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:28.236 [2024-12-06 06:57:40.739158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:28.236 [2024-12-06 06:57:40.739165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:28.236 [2024-12-06 06:57:40.739172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:28.236 [2024-12-06 06:57:40.739178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:28.236 [2024-12-06 06:57:40.739185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:28.236 [2024-12-06 06:57:40.739191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:28.236 [2024-12-06 06:57:40.739197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:28.236 [2024-12-06 06:57:40.739204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:28.236 [2024-12-06 06:57:40.739210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:28.236 [2024-12-06 06:57:40.739216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:28.236 [2024-12-06 06:57:40.739222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:28.236 [2024-12-06 06:57:40.739229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:28.236 [2024-12-06 06:57:40.739235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:28.236 [2024-12-06 06:57:40.739241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:28.236 [2024-12-06 06:57:40.739247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:28.236 [2024-12-06 06:57:40.739253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:28.236 [2024-12-06 06:57:40.739259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:28.236 [2024-12-06 06:57:40.739266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:28.236 [2024-12-06 06:57:40.739273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:28.236 [2024-12-06 06:57:40.739279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:28.236 [2024-12-06 06:57:40.739285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:28.236 [2024-12-06 06:57:40.739291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:28.236 [2024-12-06 06:57:40.739298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:28.236 [2024-12-06 06:57:40.739305] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:28.236 [2024-12-06 06:57:40.739312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:28.236 [2024-12-06 06:57:40.739319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:28.236 [2024-12-06 06:57:40.739326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:28.236 [2024-12-06 06:57:40.739333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:28.236 [2024-12-06 06:57:40.739339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:28.236 [2024-12-06 06:57:40.739348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:28.236 
[2024-12-06 06:57:40.739355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:28.236 [2024-12-06 06:57:40.739362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:28.236 [2024-12-06 06:57:40.739368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:28.236 [2024-12-06 06:57:40.739376] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:28.236 [2024-12-06 06:57:40.739384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:28.236 [2024-12-06 06:57:40.739403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:28.236 [2024-12-06 06:57:40.739416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:28.236 [2024-12-06 06:57:40.739426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:28.236 [2024-12-06 06:57:40.739434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:28.236 [2024-12-06 06:57:40.739441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:28.236 [2024-12-06 06:57:40.739448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:28.236 [2024-12-06 06:57:40.739455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:28.236 [2024-12-06 06:57:40.739473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:28.236 [2024-12-06 06:57:40.739481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:28.236 [2024-12-06 06:57:40.739488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:28.236 [2024-12-06 06:57:40.739495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:28.236 [2024-12-06 06:57:40.739502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:28.236 [2024-12-06 06:57:40.739509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:28.236 [2024-12-06 06:57:40.739516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:28.236 [2024-12-06 06:57:40.739523] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:28.236 [2024-12-06 06:57:40.739532] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:28.236 [2024-12-06 06:57:40.739540] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:28.236 [2024-12-06 06:57:40.739548] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:28.236 [2024-12-06 06:57:40.739555] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:28.236 [2024-12-06 06:57:40.739562] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:28.236 [2024-12-06 06:57:40.739569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.236 [2024-12-06 06:57:40.739576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:28.236 [2024-12-06 06:57:40.739585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.626 ms 00:31:28.236 [2024-12-06 06:57:40.739591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.236 [2024-12-06 06:57:40.765073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.236 [2024-12-06 06:57:40.765209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:28.236 [2024-12-06 06:57:40.765226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.435 ms 00:31:28.236 [2024-12-06 06:57:40.765238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.236 [2024-12-06 06:57:40.765327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.236 [2024-12-06 06:57:40.765337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:28.236 [2024-12-06 06:57:40.765346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:31:28.236 [2024-12-06 06:57:40.765353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.236 [2024-12-06 06:57:40.813643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.236 [2024-12-06 06:57:40.813789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:28.237 [2024-12-06 06:57:40.813809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.238 ms 00:31:28.237 [2024-12-06 06:57:40.813817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.237 [2024-12-06 06:57:40.813859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.237 [2024-12-06 06:57:40.813871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:28.237 [2024-12-06 06:57:40.813889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:28.237 [2024-12-06 06:57:40.813903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.237 [2024-12-06 06:57:40.814270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.237 [2024-12-06 06:57:40.814286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:28.237 [2024-12-06 06:57:40.814295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:31:28.237 [2024-12-06 06:57:40.814303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.237 [2024-12-06 06:57:40.814426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.237 [2024-12-06 06:57:40.814435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:28.237 [2024-12-06 06:57:40.814447] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:31:28.237 [2024-12-06 06:57:40.814455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.237 [2024-12-06 06:57:40.827444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.237 [2024-12-06 06:57:40.827492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:28.237 [2024-12-06 06:57:40.827502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.951 ms 00:31:28.237 [2024-12-06 06:57:40.827510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.237 [2024-12-06 06:57:40.840006] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:28.237 [2024-12-06 06:57:40.840038] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:28.237 [2024-12-06 06:57:40.840049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.237 [2024-12-06 06:57:40.840057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:28.237 [2024-12-06 06:57:40.840066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.451 ms 00:31:28.237 [2024-12-06 06:57:40.840073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.237 [2024-12-06 06:57:40.864565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.237 [2024-12-06 06:57:40.864701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:28.237 [2024-12-06 06:57:40.864718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.454 ms 00:31:28.237 [2024-12-06 06:57:40.864726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.237 [2024-12-06 06:57:40.876304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.237 [2024-12-06 06:57:40.876422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:28.237 [2024-12-06 06:57:40.876437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.532 ms 00:31:28.237 [2024-12-06 06:57:40.876444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.237 [2024-12-06 06:57:40.887488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.237 [2024-12-06 06:57:40.887519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:28.237 [2024-12-06 06:57:40.887529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.999 ms 00:31:28.237 [2024-12-06 06:57:40.887535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.237 [2024-12-06 06:57:40.888157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.237 [2024-12-06 06:57:40.888189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:28.237 [2024-12-06 06:57:40.888200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:31:28.237 [2024-12-06 06:57:40.888208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.237 [2024-12-06 06:57:40.943243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.237 [2024-12-06 06:57:40.943407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:28.237 [2024-12-06 06:57:40.943439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 55.017 ms 00:31:28.237 [2024-12-06 06:57:40.943452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.237 [2024-12-06 06:57:40.953876] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:28.237 [2024-12-06 06:57:40.956456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.237 [2024-12-06 06:57:40.956493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:28.237 [2024-12-06 06:57:40.956505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.940 ms 00:31:28.237 [2024-12-06 06:57:40.956514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.237 [2024-12-06 06:57:40.956609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.237 [2024-12-06 06:57:40.956620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:28.237 [2024-12-06 06:57:40.956631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:28.237 [2024-12-06 06:57:40.956638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.237 [2024-12-06 06:57:40.957193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.237 [2024-12-06 06:57:40.957252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:28.237 [2024-12-06 06:57:40.957267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.517 ms 00:31:28.237 [2024-12-06 06:57:40.957279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.237 [2024-12-06 06:57:40.957310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.237 [2024-12-06 06:57:40.957320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:28.237 [2024-12-06 06:57:40.957328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:28.237 [2024-12-06 06:57:40.957335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.237 [2024-12-06 06:57:40.957376] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:28.237 [2024-12-06 06:57:40.957388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.237 [2024-12-06 06:57:40.957396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:28.237 [2024-12-06 06:57:40.957403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:31:28.237 [2024-12-06 06:57:40.957412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.494 [2024-12-06 06:57:40.979998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.495 [2024-12-06 06:57:40.980121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:28.495 [2024-12-06 06:57:40.980142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.561 ms 00:31:28.495 [2024-12-06 06:57:40.980149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.495 [2024-12-06 06:57:40.980212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.495 [2024-12-06 06:57:40.980226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:28.495 [2024-12-06 06:57:40.980238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:31:28.495 [2024-12-06 06:57:40.980251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
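The trace_step notices above arrive in a fixed group of four per management step (ftl_mngt.c:427 'Action', 428 'name', 430 'duration', 431 'status'), and the finish_msg line just below sums the whole pipeline into one wall-time figure for the 'FTL startup' process. To cross-check that total against the individual steps, the duration fields can be tallied from a saved copy of this console output; a minimal sketch, where ftl_startup.log is a hypothetical file holding just the startup portion of this log:

    #!/usr/bin/env bash
    # Tally every per-step "duration: X.XXX ms" field that trace_step printed
    # and compare the sum by eye against finish_msg's total (265.650 ms here).
    grep -oE 'duration: [0-9]+\.[0-9]+ ms' ftl_startup.log \
      | awk '{ sum += $2 } END { printf "per-step total: %.3f ms\n", sum }'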
00:31:28.495 [2024-12-06 06:57:40.981704] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 265.650 ms, result 0 00:31:29.426 [2024-12-06T06:57:43.542Z] Copying: 46/1024 [MB] (46 MBps) [... 21 intermediate progress-meter redraws omitted ...] [2024-12-06T06:58:04.234Z] Copying: 1024/1024 [MB] (average 45 MBps)[2024-12-06 06:58:04.003844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.493 [2024-12-06 06:58:04.003907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:51.493 [2024-12-06 06:58:04.003923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:51.493 [2024-12-06 06:58:04.003933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.493 [2024-12-06 06:58:04.003957] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:51.493 [2024-12-06 06:58:04.006941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.493 [2024-12-06 06:58:04.006973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:51.493 [2024-12-06 06:58:04.006985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.969 ms 00:31:51.493 [2024-12-06 06:58:04.006994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.493 [2024-12-06 06:58:04.007243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.493 [2024-12-06 06:58:04.007253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:51.493 [2024-12-06 06:58:04.007263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms 00:31:51.493 [2024-12-06 06:58:04.007272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.493 [2024-12-06 06:58:04.011696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.493 [2024-12-06 06:58:04.011715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:51.493 [2024-12-06 06:58:04.011726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.409 ms 00:31:51.493 [2024-12-06 06:58:04.011740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.493 [2024-12-06 06:58:04.018209]
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.493 [2024-12-06 06:58:04.018320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:51.493 [2024-12-06 06:58:04.018380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.451 ms 00:31:51.493 [2024-12-06 06:58:04.018403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.493 [2024-12-06 06:58:04.041826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.493 [2024-12-06 06:58:04.041937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:51.493 [2024-12-06 06:58:04.041993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.344 ms 00:31:51.493 [2024-12-06 06:58:04.042016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.493 [2024-12-06 06:58:04.055316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.493 [2024-12-06 06:58:04.055429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:51.493 [2024-12-06 06:58:04.055523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.260 ms 00:31:51.493 [2024-12-06 06:58:04.055549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.493 [2024-12-06 06:58:04.057649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.493 [2024-12-06 06:58:04.057740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:51.493 [2024-12-06 06:58:04.057794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.062 ms 00:31:51.493 [2024-12-06 06:58:04.057817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.493 [2024-12-06 06:58:04.080224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.493 [2024-12-06 06:58:04.080344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:51.493 [2024-12-06 06:58:04.080391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.380 ms 00:31:51.493 [2024-12-06 06:58:04.080411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.493 [2024-12-06 06:58:04.102640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.493 [2024-12-06 06:58:04.102747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:51.493 [2024-12-06 06:58:04.102794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.189 ms 00:31:51.493 [2024-12-06 06:58:04.102816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.493 [2024-12-06 06:58:04.124744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.493 [2024-12-06 06:58:04.124835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:51.493 [2024-12-06 06:58:04.124937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.891 ms 00:31:51.493 [2024-12-06 06:58:04.124965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.493 [2024-12-06 06:58:04.146889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.493 [2024-12-06 06:58:04.146992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:51.493 [2024-12-06 06:58:04.147043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.860 ms 00:31:51.493 [2024-12-06 06:58:04.147065] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:31:51.493 [2024-12-06 06:58:04.147101] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:51.493 [2024-12-06 06:58:04.147156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:51.493 [2024-12-06 06:58:04.147195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:31:51.493 [2024-12-06 06:58:04.147248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:51.493 [2024-12-06 06:58:04.147280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:51.493 [2024-12-06 06:58:04.147328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:51.493 [2024-12-06 06:58:04.147359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:51.493 [2024-12-06 06:58:04.147388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:51.493 [2024-12-06 06:58:04.147447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:51.493 [2024-12-06 06:58:04.147499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:51.493 [2024-12-06 06:58:04.147896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:51.493 [2024-12-06 06:58:04.147915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:51.493 [2024-12-06 06:58:04.147925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:51.493 [2024-12-06 06:58:04.147932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:51.493 [2024-12-06 06:58:04.147940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:51.493 [2024-12-06 06:58:04.147948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:51.493 [2024-12-06 06:58:04.147955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:51.493 [2024-12-06 06:58:04.147963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:51.493 [2024-12-06 06:58:04.147971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:51.493 [2024-12-06 06:58:04.147978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:51.493 [2024-12-06 06:58:04.147985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:51.493 [2024-12-06 06:58:04.147993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:51.493 [2024-12-06 06:58:04.148000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:51.493 [2024-12-06 06:58:04.148007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:51.493 [2024-12-06 06:58:04.148014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 
261120 wr_cnt: 0 state: free 00:31:51.493 [2024-12-06 06:58:04.148022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:51.493 [2024-12-06 06:58:04.148029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:51.493 [2024-12-06 06:58:04.148037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148387] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.148985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.149040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.149071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.149100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.149128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 
06:58:04.149183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.149192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:51.494 [2024-12-06 06:58:04.149208] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:51.494 [2024-12-06 06:58:04.149216] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1418b45b-9351-478f-9d9d-979a1b5eff85 00:31:51.494 [2024-12-06 06:58:04.149224] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:31:51.494 [2024-12-06 06:58:04.149231] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:31:51.494 [2024-12-06 06:58:04.149237] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:51.494 [2024-12-06 06:58:04.149245] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:51.494 [2024-12-06 06:58:04.149260] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:51.494 [2024-12-06 06:58:04.149268] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:51.494 [2024-12-06 06:58:04.149276] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:51.494 [2024-12-06 06:58:04.149282] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:51.494 [2024-12-06 06:58:04.149289] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:51.494 [2024-12-06 06:58:04.149298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.494 [2024-12-06 06:58:04.149312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:51.494 [2024-12-06 06:58:04.149322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.197 ms 00:31:51.494 [2024-12-06 06:58:04.149332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.494 [2024-12-06 06:58:04.162589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.494 [2024-12-06 06:58:04.162619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:51.494 [2024-12-06 06:58:04.162630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.232 ms 00:31:51.494 [2024-12-06 06:58:04.162639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.494 [2024-12-06 06:58:04.162998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:51.495 [2024-12-06 06:58:04.163020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:51.495 [2024-12-06 06:58:04.163029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:31:51.495 [2024-12-06 06:58:04.163036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.495 [2024-12-06 06:58:04.195323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:51.495 [2024-12-06 06:58:04.195356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:51.495 [2024-12-06 06:58:04.195367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:51.495 [2024-12-06 06:58:04.195375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.495 [2024-12-06 06:58:04.195436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:51.495 [2024-12-06 06:58:04.195448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
bands metadata 00:31:51.495 [2024-12-06 06:58:04.195456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:51.495 [2024-12-06 06:58:04.195475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.495 [2024-12-06 06:58:04.195532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:51.495 [2024-12-06 06:58:04.195541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:51.495 [2024-12-06 06:58:04.195549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:51.495 [2024-12-06 06:58:04.195557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.495 [2024-12-06 06:58:04.195571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:51.495 [2024-12-06 06:58:04.195578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:51.495 [2024-12-06 06:58:04.195589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:51.495 [2024-12-06 06:58:04.195596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.758 [2024-12-06 06:58:04.270755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:51.758 [2024-12-06 06:58:04.270882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:51.758 [2024-12-06 06:58:04.270899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:51.758 [2024-12-06 06:58:04.270907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.758 [2024-12-06 06:58:04.334547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:51.758 [2024-12-06 06:58:04.334595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:51.758 [2024-12-06 06:58:04.334607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:51.758 [2024-12-06 06:58:04.334615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.758 [2024-12-06 06:58:04.334690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:51.758 [2024-12-06 06:58:04.334700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:51.758 [2024-12-06 06:58:04.334709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:51.758 [2024-12-06 06:58:04.334716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.758 [2024-12-06 06:58:04.334750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:51.758 [2024-12-06 06:58:04.334758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:51.758 [2024-12-06 06:58:04.334766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:51.758 [2024-12-06 06:58:04.334777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.758 [2024-12-06 06:58:04.334860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:51.758 [2024-12-06 06:58:04.334869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:51.758 [2024-12-06 06:58:04.334877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:51.758 [2024-12-06 06:58:04.334884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.758 [2024-12-06 06:58:04.334918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:51.758 [2024-12-06 
06:58:04.334931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:51.758 [2024-12-06 06:58:04.334943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:51.758 [2024-12-06 06:58:04.334950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.758 [2024-12-06 06:58:04.334987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:51.758 [2024-12-06 06:58:04.334995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:51.758 [2024-12-06 06:58:04.335003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:51.758 [2024-12-06 06:58:04.335010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.758 [2024-12-06 06:58:04.335049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:51.758 [2024-12-06 06:58:04.335058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:51.758 [2024-12-06 06:58:04.335066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:51.758 [2024-12-06 06:58:04.335075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:51.758 [2024-12-06 06:58:04.335181] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 331.315 ms, result 0 00:31:52.330 00:31:52.330 00:31:52.330 06:58:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:31:54.884 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:31:54.884 06:58:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:31:54.884 06:58:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:31:54.884 06:58:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:54.884 06:58:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:31:54.884 06:58:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:31:54.884 06:58:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:54.884 06:58:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:31:54.884 06:58:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 79185 00:31:54.884 06:58:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 79185 ']' 00:31:54.884 06:58:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 79185 00:31:54.884 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79185) - No such process 00:31:54.884 Process with pid 79185 is not found 00:31:54.884 06:58:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 79185 is not found' 00:31:54.884 06:58:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:31:55.146 Remove shared memory files 00:31:55.146 06:58:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:31:55.146 06:58:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:55.146 06:58:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:31:55.146 06:58:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # 
rm -f rm -f 00:31:55.146 06:58:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:31:55.146 06:58:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:55.146 06:58:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:31:55.146 ************************************ 00:31:55.146 END TEST ftl_dirty_shutdown 00:31:55.146 ************************************ 00:31:55.146 00:31:55.146 real 2m29.185s 00:31:55.146 user 2m51.327s 00:31:55.146 sys 0m24.333s 00:31:55.146 06:58:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:55.146 06:58:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:55.146 06:58:07 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:31:55.146 06:58:07 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:55.146 06:58:07 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:55.146 06:58:07 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:55.146 ************************************ 00:31:55.146 START TEST ftl_upgrade_shutdown 00:31:55.146 ************************************ 00:31:55.146 06:58:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:31:55.146 * Looking for test storage... 00:31:55.146 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:31:55.146 06:58:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:55.146 06:58:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:31:55.146 06:58:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:55.408 06:58:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:55.408 06:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:55.408 06:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:55.408 06:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:55.408 06:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:31:55.408 06:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:31:55.408 06:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:31:55.408 06:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:31:55.408 06:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:31:55.408 06:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:31:55.408 06:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:31:55.408 06:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:55.408 06:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:31:55.408 06:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:31:55.408 06:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:55.408 06:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:55.408 06:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:31:55.408 06:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:31:55.408 06:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:55.408 06:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:31:55.408 06:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:31:55.408 06:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:31:55.408 06:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:31:55.408 06:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:55.408 06:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:31:55.408 06:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:31:55.408 06:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:55.408 06:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:55.408 06:58:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:31:55.408 06:58:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:55.408 06:58:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:55.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.408 --rc genhtml_branch_coverage=1 00:31:55.408 --rc genhtml_function_coverage=1 00:31:55.408 --rc genhtml_legend=1 00:31:55.408 --rc geninfo_all_blocks=1 00:31:55.408 --rc geninfo_unexecuted_blocks=1 00:31:55.408 00:31:55.408 ' 00:31:55.408 06:58:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:55.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.408 --rc genhtml_branch_coverage=1 00:31:55.408 --rc genhtml_function_coverage=1 00:31:55.408 --rc genhtml_legend=1 00:31:55.408 --rc geninfo_all_blocks=1 00:31:55.408 --rc geninfo_unexecuted_blocks=1 00:31:55.408 00:31:55.408 ' 00:31:55.408 06:58:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:55.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.408 --rc genhtml_branch_coverage=1 00:31:55.408 --rc genhtml_function_coverage=1 00:31:55.408 --rc genhtml_legend=1 00:31:55.408 --rc geninfo_all_blocks=1 00:31:55.408 --rc geninfo_unexecuted_blocks=1 00:31:55.408 00:31:55.408 ' 00:31:55.408 06:58:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:55.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.408 --rc genhtml_branch_coverage=1 00:31:55.409 --rc genhtml_function_coverage=1 00:31:55.409 --rc genhtml_legend=1 00:31:55.409 --rc geninfo_all_blocks=1 00:31:55.409 --rc geninfo_unexecuted_blocks=1 00:31:55.409 00:31:55.409 ' 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:31:55.409 06:58:07 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80863 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80863 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80863 ']' 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:31:55.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:55.409 06:58:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:55.409 [2024-12-06 06:58:08.022699] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
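[Annotation] The trace above is tcp_target_setup from test/ftl/common.sh: it launches spdk_tgt pinned to core 0 and then blocks in waitforlisten until the RPC socket answers. A minimal sketch of that launch-and-wait pattern follows; the retry loop is illustrative, not the actual waitforlisten implementation (which also checks that the pid is still alive and honors max_retries):

    spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Launch the target pinned to core 0; it serves RPCs on /var/tmp/spdk.sock.
    "$spdk_tgt_bin" --cpumask='[0]' &
    spdk_tgt_pid=$!

    # Poll until the RPC server responds; rpc_get_methods is a cheap no-op probe.
    for (( i = 0; i < 100; i++ )); do
        "$rpc_py" rpc_get_methods &> /dev/null && break
        sleep 0.1
    done

Once the socket answers, the script proceeds to the bdev_nvme_attach_controller call seen in the trace below.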
00:31:55.409 [2024-12-06 06:58:08.023059] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80863 ] 00:31:55.671 [2024-12-06 06:58:08.188143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:55.671 [2024-12-06 06:58:08.314976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:56.611 06:58:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:56.611 06:58:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:56.611 06:58:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:56.611 06:58:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:31:56.611 06:58:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:31:56.611 06:58:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:56.611 06:58:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:31:56.611 06:58:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:56.611 06:58:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:31:56.611 06:58:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:56.611 06:58:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:31:56.611 06:58:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:56.611 06:58:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:31:56.611 06:58:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:56.611 06:58:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:31:56.611 06:58:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:56.611 06:58:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:31:56.611 06:58:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:31:56.611 06:58:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:31:56.611 06:58:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:31:56.611 06:58:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:31:56.611 06:58:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:31:56.611 06:58:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:31:56.872 06:58:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:31:56.872 06:58:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:31:56.872 06:58:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:31:56.872 06:58:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:31:56.872 06:58:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:56.872 06:58:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:31:56.872 06:58:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:31:56.872 06:58:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:31:56.872 06:58:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:56.872 { 00:31:56.872 "name": "basen1", 00:31:56.872 "aliases": [ 00:31:56.872 "801f942b-a220-4eb4-840e-90b661e21cfa" 00:31:56.872 ], 00:31:56.872 "product_name": "NVMe disk", 00:31:56.872 "block_size": 4096, 00:31:56.872 "num_blocks": 1310720, 00:31:56.872 "uuid": "801f942b-a220-4eb4-840e-90b661e21cfa", 00:31:56.872 "numa_id": -1, 00:31:56.872 "assigned_rate_limits": { 00:31:56.872 "rw_ios_per_sec": 0, 00:31:56.872 "rw_mbytes_per_sec": 0, 00:31:56.872 "r_mbytes_per_sec": 0, 00:31:56.872 "w_mbytes_per_sec": 0 00:31:56.872 }, 00:31:56.872 "claimed": true, 00:31:56.872 "claim_type": "read_many_write_one", 00:31:56.872 "zoned": false, 00:31:56.872 "supported_io_types": { 00:31:56.872 "read": true, 00:31:56.872 "write": true, 00:31:56.872 "unmap": true, 00:31:56.872 "flush": true, 00:31:56.872 "reset": true, 00:31:56.872 "nvme_admin": true, 00:31:56.872 "nvme_io": true, 00:31:56.872 "nvme_io_md": false, 00:31:56.872 "write_zeroes": true, 00:31:56.872 "zcopy": false, 00:31:56.872 "get_zone_info": false, 00:31:56.872 "zone_management": false, 00:31:56.872 "zone_append": false, 00:31:56.872 "compare": true, 00:31:56.872 "compare_and_write": false, 00:31:56.872 "abort": true, 00:31:56.872 "seek_hole": false, 00:31:56.872 "seek_data": false, 00:31:56.872 "copy": true, 00:31:56.872 "nvme_iov_md": false 00:31:56.872 }, 00:31:56.872 "driver_specific": { 00:31:56.872 "nvme": [ 00:31:56.872 { 00:31:56.872 "pci_address": "0000:00:11.0", 00:31:56.872 "trid": { 00:31:56.872 "trtype": "PCIe", 00:31:56.872 "traddr": "0000:00:11.0" 00:31:56.872 }, 00:31:56.872 "ctrlr_data": { 00:31:56.872 "cntlid": 0, 00:31:56.872 "vendor_id": "0x1b36", 00:31:56.872 "model_number": "QEMU NVMe Ctrl", 00:31:56.872 "serial_number": "12341", 00:31:56.872 "firmware_revision": "8.0.0", 00:31:56.872 "subnqn": "nqn.2019-08.org.qemu:12341", 00:31:56.872 "oacs": { 00:31:56.872 "security": 0, 00:31:56.872 "format": 1, 00:31:56.872 "firmware": 0, 00:31:56.872 "ns_manage": 1 00:31:56.872 }, 00:31:56.872 "multi_ctrlr": false, 00:31:56.872 "ana_reporting": false 00:31:56.872 }, 00:31:56.872 "vs": { 00:31:56.872 "nvme_version": "1.4" 00:31:56.872 }, 00:31:56.872 "ns_data": { 00:31:56.872 "id": 1, 00:31:56.872 "can_share": false 00:31:56.872 } 00:31:56.872 } 00:31:56.872 ], 00:31:56.872 "mp_policy": "active_passive" 00:31:56.872 } 00:31:56.872 } 00:31:56.872 ]' 00:31:56.872 06:58:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:57.132 06:58:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:31:57.132 06:58:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:57.132 06:58:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:31:57.132 06:58:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:31:57.132 06:58:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:31:57.132 06:58:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:31:57.132 06:58:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:31:57.132 06:58:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:31:57.133 06:58:09 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:57.133 06:58:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:31:57.133 06:58:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=2e324760-d999-4d09-b76e-8459b659faa3 00:31:57.133 06:58:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:31:57.133 06:58:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2e324760-d999-4d09-b76e-8459b659faa3 00:31:57.393 06:58:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:31:57.653 06:58:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=6020fe85-6c68-4145-806e-d2d5cc645f20 00:31:57.654 06:58:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 6020fe85-6c68-4145-806e-d2d5cc645f20 00:31:57.915 06:58:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=885030b4-931b-462d-b962-e27989624f53 00:31:57.915 06:58:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 885030b4-931b-462d-b962-e27989624f53 ]] 00:31:57.915 06:58:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 885030b4-931b-462d-b962-e27989624f53 5120 00:31:57.915 06:58:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:31:57.915 06:58:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:31:57.915 06:58:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=885030b4-931b-462d-b962-e27989624f53 00:31:57.915 06:58:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:31:57.915 06:58:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 885030b4-931b-462d-b962-e27989624f53 00:31:57.915 06:58:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=885030b4-931b-462d-b962-e27989624f53 00:31:57.915 06:58:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:57.915 06:58:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:31:57.915 06:58:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:31:57.915 06:58:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 885030b4-931b-462d-b962-e27989624f53 00:31:58.176 06:58:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:58.176 { 00:31:58.176 "name": "885030b4-931b-462d-b962-e27989624f53", 00:31:58.176 "aliases": [ 00:31:58.176 "lvs/basen1p0" 00:31:58.176 ], 00:31:58.176 "product_name": "Logical Volume", 00:31:58.176 "block_size": 4096, 00:31:58.176 "num_blocks": 5242880, 00:31:58.176 "uuid": "885030b4-931b-462d-b962-e27989624f53", 00:31:58.176 "assigned_rate_limits": { 00:31:58.176 "rw_ios_per_sec": 0, 00:31:58.176 "rw_mbytes_per_sec": 0, 00:31:58.176 "r_mbytes_per_sec": 0, 00:31:58.176 "w_mbytes_per_sec": 0 00:31:58.176 }, 00:31:58.176 "claimed": false, 00:31:58.176 "zoned": false, 00:31:58.176 "supported_io_types": { 00:31:58.176 "read": true, 00:31:58.176 "write": true, 00:31:58.176 "unmap": true, 00:31:58.176 "flush": false, 00:31:58.176 "reset": true, 00:31:58.176 "nvme_admin": false, 00:31:58.176 "nvme_io": false, 00:31:58.176 "nvme_io_md": false, 00:31:58.176 "write_zeroes": 
true, 00:31:58.176 "zcopy": false, 00:31:58.176 "get_zone_info": false, 00:31:58.176 "zone_management": false, 00:31:58.176 "zone_append": false, 00:31:58.176 "compare": false, 00:31:58.176 "compare_and_write": false, 00:31:58.176 "abort": false, 00:31:58.176 "seek_hole": true, 00:31:58.176 "seek_data": true, 00:31:58.176 "copy": false, 00:31:58.176 "nvme_iov_md": false 00:31:58.176 }, 00:31:58.176 "driver_specific": { 00:31:58.176 "lvol": { 00:31:58.176 "lvol_store_uuid": "6020fe85-6c68-4145-806e-d2d5cc645f20", 00:31:58.176 "base_bdev": "basen1", 00:31:58.176 "thin_provision": true, 00:31:58.176 "num_allocated_clusters": 0, 00:31:58.176 "snapshot": false, 00:31:58.176 "clone": false, 00:31:58.176 "esnap_clone": false 00:31:58.176 } 00:31:58.176 } 00:31:58.176 } 00:31:58.176 ]' 00:31:58.176 06:58:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:58.176 06:58:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:31:58.176 06:58:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:58.176 06:58:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:31:58.176 06:58:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:31:58.176 06:58:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:31:58.176 06:58:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:31:58.176 06:58:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:31:58.176 06:58:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:31:58.436 06:58:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:31:58.436 06:58:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:31:58.436 06:58:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:31:58.697 06:58:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:31:58.697 06:58:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:31:58.697 06:58:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 885030b4-931b-462d-b962-e27989624f53 -c cachen1p0 --l2p_dram_limit 2 00:31:58.697 [2024-12-06 06:58:11.371135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:58.697 [2024-12-06 06:58:11.371187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:58.697 [2024-12-06 06:58:11.371203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:58.697 [2024-12-06 06:58:11.371212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:58.697 [2024-12-06 06:58:11.371264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:58.697 [2024-12-06 06:58:11.371274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:58.697 [2024-12-06 06:58:11.371284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:31:58.697 [2024-12-06 06:58:11.371292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:58.697 [2024-12-06 06:58:11.371312] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:58.697 [2024-12-06 
06:58:11.372108] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:58.697 [2024-12-06 06:58:11.372137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:58.697 [2024-12-06 06:58:11.372146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:58.697 [2024-12-06 06:58:11.372159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.827 ms 00:31:58.697 [2024-12-06 06:58:11.372166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:58.697 [2024-12-06 06:58:11.372227] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID c0a232f0-633a-42fa-842d-8eaa31e18778 00:31:58.697 [2024-12-06 06:58:11.373624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:58.697 [2024-12-06 06:58:11.373660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:31:58.697 [2024-12-06 06:58:11.373671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:31:58.697 [2024-12-06 06:58:11.373683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:58.697 [2024-12-06 06:58:11.380821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:58.697 [2024-12-06 06:58:11.380857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:58.697 [2024-12-06 06:58:11.380867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.091 ms 00:31:58.697 [2024-12-06 06:58:11.380876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:58.697 [2024-12-06 06:58:11.380915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:58.697 [2024-12-06 06:58:11.380926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:58.697 [2024-12-06 06:58:11.380935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:31:58.697 [2024-12-06 06:58:11.380946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:58.697 [2024-12-06 06:58:11.380978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:58.697 [2024-12-06 06:58:11.380990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:58.697 [2024-12-06 06:58:11.381000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:58.697 [2024-12-06 06:58:11.381009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:58.697 [2024-12-06 06:58:11.381029] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:58.697 [2024-12-06 06:58:11.384943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:58.697 [2024-12-06 06:58:11.384972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:58.697 [2024-12-06 06:58:11.384985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.916 ms 00:31:58.697 [2024-12-06 06:58:11.384993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:58.697 [2024-12-06 06:58:11.385024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:58.697 [2024-12-06 06:58:11.385032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:58.697 [2024-12-06 06:58:11.385043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:58.697 [2024-12-06 06:58:11.385050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:31:58.697 [2024-12-06 06:58:11.385092] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:31:58.697 [2024-12-06 06:58:11.385239] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:58.697 [2024-12-06 06:58:11.385256] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:58.697 [2024-12-06 06:58:11.385267] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:31:58.697 [2024-12-06 06:58:11.385279] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:58.697 [2024-12-06 06:58:11.385288] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:31:58.697 [2024-12-06 06:58:11.385298] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:58.697 [2024-12-06 06:58:11.385306] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:58.697 [2024-12-06 06:58:11.385318] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:58.697 [2024-12-06 06:58:11.385326] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:58.697 [2024-12-06 06:58:11.385335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:58.697 [2024-12-06 06:58:11.385342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:58.697 [2024-12-06 06:58:11.385352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.245 ms 00:31:58.697 [2024-12-06 06:58:11.385361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:58.697 [2024-12-06 06:58:11.385447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:58.697 [2024-12-06 06:58:11.385474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:58.697 [2024-12-06 06:58:11.385485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:31:58.697 [2024-12-06 06:58:11.385493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:58.697 [2024-12-06 06:58:11.385595] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:58.697 [2024-12-06 06:58:11.385607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:58.697 [2024-12-06 06:58:11.385617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:58.697 [2024-12-06 06:58:11.385625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:58.697 [2024-12-06 06:58:11.385634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:58.697 [2024-12-06 06:58:11.385642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:58.697 [2024-12-06 06:58:11.385651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:58.697 [2024-12-06 06:58:11.385658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:58.697 [2024-12-06 06:58:11.385667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:58.697 [2024-12-06 06:58:11.385674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:58.698 [2024-12-06 06:58:11.385684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:58.698 [2024-12-06 06:58:11.385690] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:31:58.698 [2024-12-06 06:58:11.385699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:58.698 [2024-12-06 06:58:11.385705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:58.698 [2024-12-06 06:58:11.385714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:31:58.698 [2024-12-06 06:58:11.385721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:58.698 [2024-12-06 06:58:11.385731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:58.698 [2024-12-06 06:58:11.385738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:58.698 [2024-12-06 06:58:11.385748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:58.698 [2024-12-06 06:58:11.385756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:58.698 [2024-12-06 06:58:11.385764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:58.698 [2024-12-06 06:58:11.385770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:58.698 [2024-12-06 06:58:11.385779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:58.698 [2024-12-06 06:58:11.385785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:58.698 [2024-12-06 06:58:11.385794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:58.698 [2024-12-06 06:58:11.385801] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:58.698 [2024-12-06 06:58:11.385809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:58.698 [2024-12-06 06:58:11.385815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:58.698 [2024-12-06 06:58:11.385824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:58.698 [2024-12-06 06:58:11.385831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:58.698 [2024-12-06 06:58:11.385841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:58.698 [2024-12-06 06:58:11.385848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:58.698 [2024-12-06 06:58:11.385858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:58.698 [2024-12-06 06:58:11.385865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:58.698 [2024-12-06 06:58:11.385873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:58.698 [2024-12-06 06:58:11.385879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:58.698 [2024-12-06 06:58:11.385889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:58.698 [2024-12-06 06:58:11.385896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:58.698 [2024-12-06 06:58:11.385905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:58.698 [2024-12-06 06:58:11.385911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:58.698 [2024-12-06 06:58:11.385920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:58.698 [2024-12-06 06:58:11.385927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:58.698 [2024-12-06 06:58:11.385935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:58.698 [2024-12-06 06:58:11.385941] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:31:58.698 [2024-12-06 06:58:11.385950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:58.698 [2024-12-06 06:58:11.385958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:58.698 [2024-12-06 06:58:11.385966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:58.698 [2024-12-06 06:58:11.385974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:58.698 [2024-12-06 06:58:11.385985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:58.698 [2024-12-06 06:58:11.385992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:58.698 [2024-12-06 06:58:11.386000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:58.698 [2024-12-06 06:58:11.386006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:58.698 [2024-12-06 06:58:11.386014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:58.698 [2024-12-06 06:58:11.386022] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:58.698 [2024-12-06 06:58:11.386036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:58.698 [2024-12-06 06:58:11.386044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:58.698 [2024-12-06 06:58:11.386053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:58.698 [2024-12-06 06:58:11.386061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:58.698 [2024-12-06 06:58:11.386070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:58.698 [2024-12-06 06:58:11.386078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:58.698 [2024-12-06 06:58:11.386086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:58.698 [2024-12-06 06:58:11.386094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:58.698 [2024-12-06 06:58:11.386105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:58.698 [2024-12-06 06:58:11.386113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:58.698 [2024-12-06 06:58:11.386125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:58.698 [2024-12-06 06:58:11.386132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:58.698 [2024-12-06 06:58:11.386141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:58.698 [2024-12-06 06:58:11.386148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:31:58.698 [2024-12-06 06:58:11.386157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:58.698 [2024-12-06 06:58:11.386164] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:31:58.698 [2024-12-06 06:58:11.386174] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:58.698 [2024-12-06 06:58:11.386182] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:58.698 [2024-12-06 06:58:11.386191] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:58.698 [2024-12-06 06:58:11.386198] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:58.698 [2024-12-06 06:58:11.386207] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:58.698 [2024-12-06 06:58:11.386215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:58.698 [2024-12-06 06:58:11.386224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:58.698 [2024-12-06 06:58:11.386231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.690 ms 00:31:58.698 [2024-12-06 06:58:11.386239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:58.698 [2024-12-06 06:58:11.386276] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
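[Annotation] In the superblock metadata dump above, blk_offs and blk_sz are hex counts of 4096-byte FTL blocks, which is where the MiB figures in the human-readable layout dump come from. A quick check for the l2p region (pure arithmetic, no SPDK involved):

    # blk_sz:0xe80 from the "Region type:0x2" line above
    blk_sz=$(( 0xe80 ))                                                   # 3712 blocks
    awk -v b="$blk_sz" 'BEGIN { printf "%.2f MiB\n", b * 4096 / (1024 * 1024) }'
    # prints 14.50 MiB, matching "Region l2p ... blocks: 14.50 MiB" above

The same blocks-times-4096 arithmetic underlies get_bdev_size earlier in the trace: 1310720 blocks x 4096 B = 5120 MiB for basen1, and 5242880 blocks x 4096 B = 20480 MiB for the thin-provisioned lvol.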
00:31:58.698 [2024-12-06 06:58:11.386290] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:32:01.243 [2024-12-06 06:58:13.608984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.243 [2024-12-06 06:58:13.609050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:32:01.243 [2024-12-06 06:58:13.609065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2222.696 ms 00:32:01.243 [2024-12-06 06:58:13.609076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.243 [2024-12-06 06:58:13.637285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.243 [2024-12-06 06:58:13.637337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:01.243 [2024-12-06 06:58:13.637351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.993 ms 00:32:01.243 [2024-12-06 06:58:13.637363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.243 [2024-12-06 06:58:13.637438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.243 [2024-12-06 06:58:13.637452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:32:01.243 [2024-12-06 06:58:13.637476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:32:01.243 [2024-12-06 06:58:13.637493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.243 [2024-12-06 06:58:13.670392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.243 [2024-12-06 06:58:13.670432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:01.243 [2024-12-06 06:58:13.670444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.862 ms 00:32:01.243 [2024-12-06 06:58:13.670453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.243 [2024-12-06 06:58:13.670494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.243 [2024-12-06 06:58:13.670510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:01.243 [2024-12-06 06:58:13.670518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:01.243 [2024-12-06 06:58:13.670527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.243 [2024-12-06 06:58:13.670970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.243 [2024-12-06 06:58:13.670997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:01.243 [2024-12-06 06:58:13.671015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.388 ms 00:32:01.243 [2024-12-06 06:58:13.671025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.243 [2024-12-06 06:58:13.671064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.243 [2024-12-06 06:58:13.671074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:01.243 [2024-12-06 06:58:13.671085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:32:01.243 [2024-12-06 06:58:13.671096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.243 [2024-12-06 06:58:13.686643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.243 [2024-12-06 06:58:13.686840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:01.243 [2024-12-06 06:58:13.686859] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.529 ms 00:32:01.243 [2024-12-06 06:58:13.686870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.243 [2024-12-06 06:58:13.711314] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:32:01.243 [2024-12-06 06:58:13.712555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.243 [2024-12-06 06:58:13.712593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:32:01.243 [2024-12-06 06:58:13.712611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.608 ms 00:32:01.243 [2024-12-06 06:58:13.712624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.243 [2024-12-06 06:58:13.744856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.243 [2024-12-06 06:58:13.745012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:32:01.243 [2024-12-06 06:58:13.745036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.185 ms 00:32:01.243 [2024-12-06 06:58:13.745045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.243 [2024-12-06 06:58:13.745133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.243 [2024-12-06 06:58:13.745146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:32:01.243 [2024-12-06 06:58:13.745159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:32:01.243 [2024-12-06 06:58:13.745168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.243 [2024-12-06 06:58:13.767985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.243 [2024-12-06 06:58:13.768103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:32:01.243 [2024-12-06 06:58:13.768122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.762 ms 00:32:01.243 [2024-12-06 06:58:13.768131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.243 [2024-12-06 06:58:13.790861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.243 [2024-12-06 06:58:13.790981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:32:01.243 [2024-12-06 06:58:13.791001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.693 ms 00:32:01.243 [2024-12-06 06:58:13.791009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.243 [2024-12-06 06:58:13.791625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.244 [2024-12-06 06:58:13.791640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:32:01.244 [2024-12-06 06:58:13.791652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.560 ms 00:32:01.244 [2024-12-06 06:58:13.791663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.244 [2024-12-06 06:58:13.862903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.244 [2024-12-06 06:58:13.862938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:32:01.244 [2024-12-06 06:58:13.862955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 71.205 ms 00:32:01.244 [2024-12-06 06:58:13.862963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.244 [2024-12-06 06:58:13.887173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:32:01.244 [2024-12-06 06:58:13.887208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:32:01.244 [2024-12-06 06:58:13.887222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.140 ms 00:32:01.244 [2024-12-06 06:58:13.887230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.244 [2024-12-06 06:58:13.909863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.244 [2024-12-06 06:58:13.909993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:32:01.244 [2024-12-06 06:58:13.910013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.593 ms 00:32:01.244 [2024-12-06 06:58:13.910020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.244 [2024-12-06 06:58:13.933169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.244 [2024-12-06 06:58:13.933286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:32:01.244 [2024-12-06 06:58:13.933305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.113 ms 00:32:01.244 [2024-12-06 06:58:13.933313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.244 [2024-12-06 06:58:13.933351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.244 [2024-12-06 06:58:13.933360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:32:01.244 [2024-12-06 06:58:13.933373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:32:01.244 [2024-12-06 06:58:13.933384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.244 [2024-12-06 06:58:13.933477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.244 [2024-12-06 06:58:13.933491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:32:01.244 [2024-12-06 06:58:13.933501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:32:01.244 [2024-12-06 06:58:13.933509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.244 [2024-12-06 06:58:13.934452] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2562.881 ms, result 0 00:32:01.244 { 00:32:01.244 "name": "ftl", 00:32:01.244 "uuid": "c0a232f0-633a-42fa-842d-8eaa31e18778" 00:32:01.244 } 00:32:01.244 06:58:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:32:01.505 [2024-12-06 06:58:14.141768] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:01.505 06:58:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:32:01.766 06:58:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:32:02.025 [2024-12-06 06:58:14.546149] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:32:02.025 06:58:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:32:02.025 [2024-12-06 06:58:14.738413] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:02.025 06:58:14 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:32:02.591 Fill FTL, iteration 1 00:32:02.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:32:02.591 06:58:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:32:02.591 06:58:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:32:02.591 06:58:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:32:02.591 06:58:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:32:02.591 06:58:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:32:02.591 06:58:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:32:02.591 06:58:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:32:02.591 06:58:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:32:02.591 06:58:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:32:02.591 06:58:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:32:02.591 06:58:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:32:02.591 06:58:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:32:02.591 06:58:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:02.591 06:58:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:02.591 06:58:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:02.591 06:58:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:32:02.591 06:58:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=80974 00:32:02.591 06:58:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:32:02.591 06:58:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 80974 /var/tmp/spdk.tgt.sock 00:32:02.591 06:58:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:32:02.591 06:58:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80974 ']' 00:32:02.591 06:58:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:32:02.591 06:58:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:02.591 06:58:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:32:02.591 06:58:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:02.591 06:58:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:02.591 [2024-12-06 06:58:15.148324] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
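[Annotation] The nvmf_* RPCs traced at common.sh@121-124 export the ftl bdev over NVMe-oF/TCP so that a second, independent SPDK process (the "initiator" being started above on core 1) can drive I/O to it. Condensed from the trace, with the rpc.py path as logged:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$rpc_py" nvmf_create_transport --trtype TCP
    "$rpc_py" nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
    "$rpc_py" nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
    "$rpc_py" nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 \
        -t TCP -f ipv4 -s 4420 -a 127.0.0.1

Running the load generator in a separate process is what lets the test later kill and restart pieces independently, which is the point of an upgrade/shutdown test.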
00:32:02.591 [2024-12-06 06:58:15.148781] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80974 ] 00:32:02.591 [2024-12-06 06:58:15.306292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:02.848 [2024-12-06 06:58:15.402935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:03.414 06:58:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:03.414 06:58:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:32:03.414 06:58:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:32:03.672 ftln1 00:32:03.672 06:58:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:32:03.672 06:58:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:32:03.931 06:58:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:32:03.931 06:58:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 80974 00:32:03.931 06:58:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80974 ']' 00:32:03.931 06:58:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 80974 00:32:03.931 06:58:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:32:03.931 06:58:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:03.931 06:58:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80974 00:32:03.931 killing process with pid 80974 00:32:03.931 06:58:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:03.931 06:58:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:03.931 06:58:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80974' 00:32:03.931 06:58:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 80974 00:32:03.931 06:58:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 80974 00:32:05.307 06:58:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:32:05.307 06:58:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:32:05.566 [2024-12-06 06:58:18.085711] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
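[Annotation] The common.sh@167-173 steps above attach the initiator to the exported subsystem (producing bdev ftln1) and then snapshot just the bdev subsystem config into ini.json, wrapped so spdk_dd can replay it via --json without a live RPC server. A condensed sketch of what the trace shows (the brace-group redirection is a reconstruction):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    ini_sock=/var/tmp/spdk.tgt.sock
    ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json

    # Attach to the exported subsystem; prints the new bdev name (ftln1).
    "$rpc_py" -s "$ini_sock" bdev_nvme_attach_controller -b ftl -t tcp \
        -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0

    # Capture the bdev subsystem config and wrap it in a full config document.
    {
        echo '{"subsystems": ['
        "$rpc_py" -s "$ini_sock" save_subsystem_config -n bdev
        echo ']}'
    } > "$ini_cnfg"

With ini.json written, the throwaway initiator target (pid 80974 here) is killed; each subsequent spdk_dd run recreates the bdev stack from the JSON instead.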
00:32:05.566 [2024-12-06 06:58:18.085833] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81013 ] 00:32:05.566 [2024-12-06 06:58:18.243423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:05.825 [2024-12-06 06:58:18.348154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:07.203  [2024-12-06T06:58:20.877Z] Copying: 213/1024 [MB] (213 MBps) [2024-12-06T06:58:21.809Z] Copying: 452/1024 [MB] (239 MBps) [2024-12-06T06:58:22.743Z] Copying: 706/1024 [MB] (254 MBps) [2024-12-06T06:58:23.001Z] Copying: 959/1024 [MB] (253 MBps) [2024-12-06T06:58:23.936Z] Copying: 1024/1024 [MB] (average 240 MBps) 00:32:11.195 00:32:11.195 Calculate MD5 checksum, iteration 1 00:32:11.195 06:58:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:32:11.195 06:58:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:32:11.195 06:58:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:11.195 06:58:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:11.195 06:58:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:11.195 06:58:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:11.195 06:58:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:11.195 06:58:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:11.195 [2024-12-06 06:58:23.653060] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
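[Annotation] Each iteration writes 1024 x 1 MiB of random data into the FTL bdev and then reads the same region back into a file for checksumming. Both --seek and --skip are in --bs units, so iteration 1 covers blocks 0-1023 and iteration 2 covers blocks 1024-2047. Condensed from the traced spdk_dd invocations:

    spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
    file=/home/vagrant/spdk_repo/spdk/test/ftl/file

    # Fill: 1 GiB of random data at block offset $seek, queue depth 2.
    "$spdk_dd" --cpumask='[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json="$ini_cnfg" --if=/dev/urandom --ob=ftln1 \
        --bs=1048576 --count=1024 --qd=2 --seek=0

    # Read the same 1 GiB back out for checksumming.
    "$spdk_dd" --cpumask='[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json="$ini_cnfg" --ib=ftln1 --of="$file" \
        --bs=1048576 --count=1024 --qd=2 --skip=0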
00:32:11.195 [2024-12-06 06:58:23.653179] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81077 ] 00:32:11.195 [2024-12-06 06:58:23.808530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:11.195 [2024-12-06 06:58:23.899313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:12.576  [2024-12-06T06:58:25.882Z] Copying: 682/1024 [MB] (682 MBps) [2024-12-06T06:58:26.446Z] Copying: 1024/1024 [MB] (average 690 MBps) 00:32:13.705 00:32:13.705 06:58:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:32:13.705 06:58:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:16.237 06:58:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:32:16.237 Fill FTL, iteration 2 00:32:16.237 06:58:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=17e24ce6a152e80b3bca18c282613103 00:32:16.237 06:58:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:32:16.237 06:58:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:32:16.237 06:58:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:32:16.237 06:58:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:32:16.237 06:58:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:16.237 06:58:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:16.237 06:58:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:16.237 06:58:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:16.237 06:58:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:32:16.237 [2024-12-06 06:58:28.539785] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
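[Annotation] The md5sum | cut pair at upgrade_shutdown.sh@47-48 records one checksum per iteration into the sums array, while seek and skip advance by count so each pass covers the next 1 GiB. The loop shape, reconstructed from the traced steps (tcp_dd is the script's own helper seen at common.sh@198-199; the body below is a sketch, not the verbatim script):

    seek=0; skip=0; bs=1048576; count=1024; qd=2; iterations=2
    file=/home/vagrant/spdk_repo/spdk/test/ftl/file
    sums=()

    for (( i = 0; i < iterations; i++ )); do
        echo "Fill FTL, iteration $(( i + 1 ))"
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
        (( seek += count ))    # 0 -> 1024 -> 2048, matching the @41 lines in the trace

        echo "Calculate MD5 checksum, iteration $(( i + 1 ))"
        tcp_dd --ib=ftln1 --of="$file" --bs=$bs --count=$count --qd=$qd --skip=$skip
        (( skip += count ))
        sums[i]=$(md5sum "$file" | cut -f1 -d' ')
    done

The recorded checksums (17e24ce6... here, and a second one after iteration 2) are what the test later uses to prove the data survived the FTL shutdown/upgrade path intact.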
00:32:16.237 [2024-12-06 06:58:28.539880] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81133 ] 00:32:16.237 [2024-12-06 06:58:28.694776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:16.237 [2024-12-06 06:58:28.800653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:17.618  [2024-12-06T06:58:31.292Z] Copying: 162/1024 [MB] (162 MBps) [2024-12-06T06:58:32.223Z] Copying: 387/1024 [MB] (225 MBps) [2024-12-06T06:58:33.214Z] Copying: 649/1024 [MB] (262 MBps) [2024-12-06T06:58:33.779Z] Copying: 908/1024 [MB] (259 MBps) [2024-12-06T06:58:34.347Z] Copying: 1024/1024 [MB] (average 228 MBps) 00:32:21.606 00:32:21.606 06:58:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:32:21.606 06:58:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:32:21.606 Calculate MD5 checksum, iteration 2 00:32:21.606 06:58:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:21.606 06:58:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:21.606 06:58:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:21.606 06:58:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:21.606 06:58:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:21.606 06:58:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:21.900 [2024-12-06 06:58:34.380105] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
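[Annotation] For scale: seek has now advanced to 2048 blocks of 1 MiB, so the two fill passes together wrote 2 GiB, comfortably inside the 20480 MiB base device and 5120 MiB NV cache configured at the top of the test:

    echo $(( 2048 * 1048576 ))    # 2147483648 bytes = 2 GiB written in total
    echo $(( 20480 / 2048 ))      # the base device is 10x the written span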
00:32:21.900 [2024-12-06 06:58:34.380378] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81192 ] 00:32:21.900 [2024-12-06 06:58:34.533963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:21.900 [2024-12-06 06:58:34.618931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:23.343  [2024-12-06T06:58:37.029Z] Copying: 619/1024 [MB] (619 MBps) [2024-12-06T06:58:37.966Z] Copying: 1024/1024 [MB] (average 617 MBps) 00:32:25.225 00:32:25.225 06:58:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:32:25.225 06:58:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:27.138 06:58:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:32:27.138 06:58:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=b89216b95c7052cc71dbfa836b834713 00:32:27.138 06:58:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:32:27.138 06:58:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:32:27.138 06:58:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:32:27.396 [2024-12-06 06:58:39.979435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:27.396 [2024-12-06 06:58:39.979499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:32:27.396 [2024-12-06 06:58:39.979531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:32:27.396 [2024-12-06 06:58:39.979540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:27.396 [2024-12-06 06:58:39.979564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:27.396 [2024-12-06 06:58:39.979575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:32:27.396 [2024-12-06 06:58:39.979584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:32:27.396 [2024-12-06 06:58:39.979591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:27.396 [2024-12-06 06:58:39.979611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:27.396 [2024-12-06 06:58:39.979619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:32:27.396 [2024-12-06 06:58:39.979626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:32:27.396 [2024-12-06 06:58:39.979633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:27.396 [2024-12-06 06:58:39.979696] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.254 ms, result 0 00:32:27.396 true 00:32:27.396 06:58:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:27.655 { 00:32:27.655 "name": "ftl", 00:32:27.655 "properties": [ 00:32:27.655 { 00:32:27.655 "name": "superblock_version", 00:32:27.655 "value": 5, 00:32:27.655 "read-only": true 00:32:27.655 }, 00:32:27.655 { 00:32:27.655 "name": "base_device", 00:32:27.655 "bands": [ 00:32:27.655 { 00:32:27.655 "id": 0, 00:32:27.655 "state": "FREE", 00:32:27.655 "validity": 0.0 
00:32:27.655 }, 00:32:27.655 { 00:32:27.655 "id": 1, 00:32:27.655 "state": "FREE", 00:32:27.655 "validity": 0.0 00:32:27.655 }, 00:32:27.655 { 00:32:27.655 "id": 2, 00:32:27.655 "state": "FREE", 00:32:27.655 "validity": 0.0 00:32:27.655 }, 00:32:27.655 { 00:32:27.655 "id": 3, 00:32:27.655 "state": "FREE", 00:32:27.655 "validity": 0.0 00:32:27.655 }, 00:32:27.655 { 00:32:27.655 "id": 4, 00:32:27.655 "state": "FREE", 00:32:27.655 "validity": 0.0 00:32:27.655 }, 00:32:27.655 { 00:32:27.655 "id": 5, 00:32:27.655 "state": "FREE", 00:32:27.655 "validity": 0.0 00:32:27.655 }, 00:32:27.655 { 00:32:27.655 "id": 6, 00:32:27.655 "state": "FREE", 00:32:27.655 "validity": 0.0 00:32:27.655 }, 00:32:27.655 { 00:32:27.655 "id": 7, 00:32:27.655 "state": "FREE", 00:32:27.655 "validity": 0.0 00:32:27.655 }, 00:32:27.655 { 00:32:27.655 "id": 8, 00:32:27.655 "state": "FREE", 00:32:27.655 "validity": 0.0 00:32:27.655 }, 00:32:27.655 { 00:32:27.655 "id": 9, 00:32:27.655 "state": "FREE", 00:32:27.655 "validity": 0.0 00:32:27.655 }, 00:32:27.655 { 00:32:27.655 "id": 10, 00:32:27.655 "state": "FREE", 00:32:27.655 "validity": 0.0 00:32:27.655 }, 00:32:27.655 { 00:32:27.655 "id": 11, 00:32:27.655 "state": "FREE", 00:32:27.655 "validity": 0.0 00:32:27.655 }, 00:32:27.655 { 00:32:27.655 "id": 12, 00:32:27.655 "state": "FREE", 00:32:27.655 "validity": 0.0 00:32:27.655 }, 00:32:27.655 { 00:32:27.655 "id": 13, 00:32:27.655 "state": "FREE", 00:32:27.655 "validity": 0.0 00:32:27.655 }, 00:32:27.655 { 00:32:27.655 "id": 14, 00:32:27.655 "state": "FREE", 00:32:27.655 "validity": 0.0 00:32:27.655 }, 00:32:27.655 { 00:32:27.655 "id": 15, 00:32:27.655 "state": "FREE", 00:32:27.655 "validity": 0.0 00:32:27.655 }, 00:32:27.655 { 00:32:27.655 "id": 16, 00:32:27.655 "state": "FREE", 00:32:27.655 "validity": 0.0 00:32:27.655 }, 00:32:27.655 { 00:32:27.655 "id": 17, 00:32:27.655 "state": "FREE", 00:32:27.655 "validity": 0.0 00:32:27.655 } 00:32:27.655 ], 00:32:27.655 "read-only": true 00:32:27.655 }, 00:32:27.655 { 00:32:27.655 "name": "cache_device", 00:32:27.655 "type": "bdev", 00:32:27.655 "chunks": [ 00:32:27.655 { 00:32:27.655 "id": 0, 00:32:27.655 "state": "INACTIVE", 00:32:27.655 "utilization": 0.0 00:32:27.655 }, 00:32:27.655 { 00:32:27.655 "id": 1, 00:32:27.655 "state": "CLOSED", 00:32:27.655 "utilization": 1.0 00:32:27.655 }, 00:32:27.655 { 00:32:27.655 "id": 2, 00:32:27.655 "state": "CLOSED", 00:32:27.655 "utilization": 1.0 00:32:27.655 }, 00:32:27.655 { 00:32:27.655 "id": 3, 00:32:27.655 "state": "OPEN", 00:32:27.655 "utilization": 0.001953125 00:32:27.655 }, 00:32:27.655 { 00:32:27.655 "id": 4, 00:32:27.655 "state": "OPEN", 00:32:27.656 "utilization": 0.0 00:32:27.656 } 00:32:27.656 ], 00:32:27.656 "read-only": true 00:32:27.656 }, 00:32:27.656 { 00:32:27.656 "name": "verbose_mode", 00:32:27.656 "value": true, 00:32:27.656 "unit": "", 00:32:27.656 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:32:27.656 }, 00:32:27.656 { 00:32:27.656 "name": "prep_upgrade_on_shutdown", 00:32:27.656 "value": false, 00:32:27.656 "unit": "", 00:32:27.656 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:32:27.656 } 00:32:27.656 ] 00:32:27.656 } 00:32:27.656 06:58:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:32:27.656 [2024-12-06 06:58:40.335847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
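The dump above is the complete bdev_ftl_get_properties output; what matters for the test at this point is that prep_upgrade_on_shutdown still reads false before upgrade_shutdown.sh@56 flips it. A minimal sketch of the same checks done by hand, using the rpc.py entry points shown in this log (the first jq filter is written here for illustration; the second is the one @63 runs a few lines below):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Read a single property out of the JSON dump (illustrative filter)
  $RPC bdev_ftl_get_properties -b ftl \
    | jq -r '.properties[] | select(.name == "prep_upgrade_on_shutdown") | .value'
  # Count cache chunks with non-zero utilization; the @64 check below expects a non-zero count
  $RPC bdev_ftl_get_properties -b ftl \
    | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'
  # Arm the upgrade path before shutdown, exactly as @56 does next
  $RPC bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true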
00:32:27.656 [2024-12-06 06:58:40.335891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:32:27.656 [2024-12-06 06:58:40.335904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:32:27.656 [2024-12-06 06:58:40.335912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:27.656 [2024-12-06 06:58:40.335934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:27.656 [2024-12-06 06:58:40.335942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:32:27.656 [2024-12-06 06:58:40.335949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:32:27.656 [2024-12-06 06:58:40.335956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:27.656 [2024-12-06 06:58:40.335974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:27.656 [2024-12-06 06:58:40.335982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:32:27.656 [2024-12-06 06:58:40.335989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:32:27.656 [2024-12-06 06:58:40.335996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:27.656 [2024-12-06 06:58:40.336047] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.190 ms, result 0 00:32:27.656 true 00:32:27.656 06:58:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:32:27.656 06:58:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:27.656 06:58:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:32:27.920 06:58:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:32:27.920 06:58:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:32:27.920 06:58:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:32:28.178 [2024-12-06 06:58:40.728303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.178 [2024-12-06 06:58:40.728345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:32:28.178 [2024-12-06 06:58:40.728356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:28.178 [2024-12-06 06:58:40.728363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:28.178 [2024-12-06 06:58:40.728384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.178 [2024-12-06 06:58:40.728392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:32:28.178 [2024-12-06 06:58:40.728399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:32:28.178 [2024-12-06 06:58:40.728406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:28.178 [2024-12-06 06:58:40.728424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.178 [2024-12-06 06:58:40.728432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:32:28.178 [2024-12-06 06:58:40.728439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:32:28.178 [2024-12-06 06:58:40.728445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:32:28.178 [2024-12-06 06:58:40.728512] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.195 ms, result 0 00:32:28.178 true 00:32:28.178 06:58:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:28.436 { 00:32:28.436 "name": "ftl", 00:32:28.436 "properties": [ 00:32:28.436 { 00:32:28.436 "name": "superblock_version", 00:32:28.436 "value": 5, 00:32:28.436 "read-only": true 00:32:28.436 }, 00:32:28.436 { 00:32:28.436 "name": "base_device", 00:32:28.436 "bands": [ 00:32:28.436 { 00:32:28.436 "id": 0, 00:32:28.436 "state": "FREE", 00:32:28.436 "validity": 0.0 00:32:28.436 }, 00:32:28.436 { 00:32:28.436 "id": 1, 00:32:28.436 "state": "FREE", 00:32:28.436 "validity": 0.0 00:32:28.436 }, 00:32:28.436 { 00:32:28.436 "id": 2, 00:32:28.436 "state": "FREE", 00:32:28.436 "validity": 0.0 00:32:28.436 }, 00:32:28.436 { 00:32:28.436 "id": 3, 00:32:28.436 "state": "FREE", 00:32:28.436 "validity": 0.0 00:32:28.436 }, 00:32:28.436 { 00:32:28.436 "id": 4, 00:32:28.436 "state": "FREE", 00:32:28.436 "validity": 0.0 00:32:28.436 }, 00:32:28.436 { 00:32:28.436 "id": 5, 00:32:28.436 "state": "FREE", 00:32:28.436 "validity": 0.0 00:32:28.436 }, 00:32:28.436 { 00:32:28.436 "id": 6, 00:32:28.436 "state": "FREE", 00:32:28.436 "validity": 0.0 00:32:28.436 }, 00:32:28.436 { 00:32:28.436 "id": 7, 00:32:28.436 "state": "FREE", 00:32:28.436 "validity": 0.0 00:32:28.436 }, 00:32:28.436 { 00:32:28.436 "id": 8, 00:32:28.436 "state": "FREE", 00:32:28.436 "validity": 0.0 00:32:28.436 }, 00:32:28.436 { 00:32:28.436 "id": 9, 00:32:28.436 "state": "FREE", 00:32:28.436 "validity": 0.0 00:32:28.436 }, 00:32:28.436 { 00:32:28.436 "id": 10, 00:32:28.436 "state": "FREE", 00:32:28.436 "validity": 0.0 00:32:28.436 }, 00:32:28.436 { 00:32:28.436 "id": 11, 00:32:28.436 "state": "FREE", 00:32:28.436 "validity": 0.0 00:32:28.436 }, 00:32:28.436 { 00:32:28.436 "id": 12, 00:32:28.436 "state": "FREE", 00:32:28.436 "validity": 0.0 00:32:28.436 }, 00:32:28.436 { 00:32:28.436 "id": 13, 00:32:28.436 "state": "FREE", 00:32:28.436 "validity": 0.0 00:32:28.436 }, 00:32:28.436 { 00:32:28.436 "id": 14, 00:32:28.436 "state": "FREE", 00:32:28.436 "validity": 0.0 00:32:28.436 }, 00:32:28.436 { 00:32:28.436 "id": 15, 00:32:28.436 "state": "FREE", 00:32:28.436 "validity": 0.0 00:32:28.436 }, 00:32:28.436 { 00:32:28.436 "id": 16, 00:32:28.436 "state": "FREE", 00:32:28.436 "validity": 0.0 00:32:28.436 }, 00:32:28.436 { 00:32:28.436 "id": 17, 00:32:28.436 "state": "FREE", 00:32:28.436 "validity": 0.0 00:32:28.436 } 00:32:28.436 ], 00:32:28.436 "read-only": true 00:32:28.436 }, 00:32:28.436 { 00:32:28.436 "name": "cache_device", 00:32:28.436 "type": "bdev", 00:32:28.436 "chunks": [ 00:32:28.436 { 00:32:28.436 "id": 0, 00:32:28.436 "state": "INACTIVE", 00:32:28.436 "utilization": 0.0 00:32:28.436 }, 00:32:28.436 { 00:32:28.436 "id": 1, 00:32:28.436 "state": "CLOSED", 00:32:28.436 "utilization": 1.0 00:32:28.436 }, 00:32:28.436 { 00:32:28.436 "id": 2, 00:32:28.436 "state": "CLOSED", 00:32:28.436 "utilization": 1.0 00:32:28.436 }, 00:32:28.436 { 00:32:28.436 "id": 3, 00:32:28.436 "state": "OPEN", 00:32:28.436 "utilization": 0.001953125 00:32:28.436 }, 00:32:28.436 { 00:32:28.436 "id": 4, 00:32:28.436 "state": "OPEN", 00:32:28.436 "utilization": 0.0 00:32:28.436 } 00:32:28.436 ], 00:32:28.436 "read-only": true 00:32:28.436 }, 00:32:28.436 { 00:32:28.436 "name": "verbose_mode", 
00:32:28.436 "value": true, 00:32:28.436 "unit": "", 00:32:28.436 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:32:28.436 }, 00:32:28.436 { 00:32:28.436 "name": "prep_upgrade_on_shutdown", 00:32:28.436 "value": true, 00:32:28.436 "unit": "", 00:32:28.436 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:32:28.436 } 00:32:28.436 ] 00:32:28.436 } 00:32:28.436 06:58:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:32:28.436 06:58:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 80863 ]] 00:32:28.436 06:58:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 80863 00:32:28.436 06:58:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80863 ']' 00:32:28.436 06:58:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 80863 00:32:28.436 06:58:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:32:28.436 06:58:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:28.436 06:58:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80863 00:32:28.436 killing process with pid 80863 00:32:28.436 06:58:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:28.436 06:58:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:28.436 06:58:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80863' 00:32:28.436 06:58:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 80863 00:32:28.436 06:58:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 80863 00:32:29.002 [2024-12-06 06:58:41.653792] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:32:29.002 [2024-12-06 06:58:41.665852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.002 [2024-12-06 06:58:41.666005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:32:29.002 [2024-12-06 06:58:41.666024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:29.002 [2024-12-06 06:58:41.666032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.002 [2024-12-06 06:58:41.666056] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:32:29.002 [2024-12-06 06:58:41.668636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.002 [2024-12-06 06:58:41.668661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:32:29.002 [2024-12-06 06:58:41.668671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.567 ms 00:32:29.002 [2024-12-06 06:58:41.668684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:38.974 [2024-12-06 06:58:50.953236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:38.974 [2024-12-06 06:58:50.953414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:32:38.974 [2024-12-06 06:58:50.953441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9284.496 ms 00:32:38.974 [2024-12-06 06:58:50.953450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:38.974 [2024-12-06 06:58:50.955335] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:32:38.974 [2024-12-06 06:58:50.955367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:32:38.974 [2024-12-06 06:58:50.955378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.851 ms 00:32:38.974 [2024-12-06 06:58:50.955387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:38.974 [2024-12-06 06:58:50.956556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:38.974 [2024-12-06 06:58:50.956584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:32:38.974 [2024-12-06 06:58:50.956594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.140 ms 00:32:38.974 [2024-12-06 06:58:50.956607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:38.974 [2024-12-06 06:58:50.966066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:38.974 [2024-12-06 06:58:50.966100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:32:38.974 [2024-12-06 06:58:50.966112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.422 ms 00:32:38.974 [2024-12-06 06:58:50.966120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:38.974 [2024-12-06 06:58:50.972913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:38.975 [2024-12-06 06:58:50.972948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:32:38.975 [2024-12-06 06:58:50.972958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.760 ms 00:32:38.975 [2024-12-06 06:58:50.972967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:38.975 [2024-12-06 06:58:50.973047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:38.975 [2024-12-06 06:58:50.973061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:32:38.975 [2024-12-06 06:58:50.973070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:32:38.975 [2024-12-06 06:58:50.973077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:38.975 [2024-12-06 06:58:50.982774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:38.975 [2024-12-06 06:58:50.982806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:32:38.975 [2024-12-06 06:58:50.982816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.682 ms 00:32:38.975 [2024-12-06 06:58:50.982823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:38.975 [2024-12-06 06:58:50.992293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:38.975 [2024-12-06 06:58:50.992427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:32:38.975 [2024-12-06 06:58:50.992442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.439 ms 00:32:38.975 [2024-12-06 06:58:50.992448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:38.975 [2024-12-06 06:58:51.002273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:38.975 [2024-12-06 06:58:51.002305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:32:38.975 [2024-12-06 06:58:51.002315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.782 ms 00:32:38.975 [2024-12-06 06:58:51.002323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:38.975 [2024-12-06 06:58:51.011516] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:38.975 [2024-12-06 06:58:51.011547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:32:38.975 [2024-12-06 06:58:51.011555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.133 ms 00:32:38.975 [2024-12-06 06:58:51.011562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:38.975 [2024-12-06 06:58:51.011592] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:32:38.975 [2024-12-06 06:58:51.011614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:32:38.975 [2024-12-06 06:58:51.011624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:32:38.975 [2024-12-06 06:58:51.011631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:32:38.975 [2024-12-06 06:58:51.011639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:38.975 [2024-12-06 06:58:51.011646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:38.975 [2024-12-06 06:58:51.011654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:38.975 [2024-12-06 06:58:51.011661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:38.975 [2024-12-06 06:58:51.011668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:38.975 [2024-12-06 06:58:51.011676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:38.975 [2024-12-06 06:58:51.011683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:38.975 [2024-12-06 06:58:51.011690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:38.975 [2024-12-06 06:58:51.011697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:38.975 [2024-12-06 06:58:51.011704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:38.975 [2024-12-06 06:58:51.011712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:38.975 [2024-12-06 06:58:51.011719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:38.975 [2024-12-06 06:58:51.011726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:38.975 [2024-12-06 06:58:51.011734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:38.975 [2024-12-06 06:58:51.011741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:38.975 [2024-12-06 06:58:51.011750] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:32:38.975 [2024-12-06 06:58:51.011758] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: c0a232f0-633a-42fa-842d-8eaa31e18778 00:32:38.975 [2024-12-06 06:58:51.011765] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:32:38.975 [2024-12-06 06:58:51.011772] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:32:38.975 [2024-12-06 06:58:51.011779] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:32:38.975 [2024-12-06 06:58:51.011787] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:32:38.975 [2024-12-06 06:58:51.011796] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:32:38.975 [2024-12-06 06:58:51.011804] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:32:38.975 [2024-12-06 06:58:51.011813] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:32:38.975 [2024-12-06 06:58:51.011820] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:32:38.975 [2024-12-06 06:58:51.011826] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:32:38.975 [2024-12-06 06:58:51.011834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:38.975 [2024-12-06 06:58:51.011842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:32:38.975 [2024-12-06 06:58:51.011850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.243 ms 00:32:38.975 [2024-12-06 06:58:51.011857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:38.975 [2024-12-06 06:58:51.024445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:38.975 [2024-12-06 06:58:51.024506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:32:38.975 [2024-12-06 06:58:51.024522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.573 ms 00:32:38.975 [2024-12-06 06:58:51.024530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:38.975 [2024-12-06 06:58:51.024867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:38.975 [2024-12-06 06:58:51.024881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:32:38.975 [2024-12-06 06:58:51.024889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.317 ms 00:32:38.975 [2024-12-06 06:58:51.024896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:38.975 [2024-12-06 06:58:51.066396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:38.975 [2024-12-06 06:58:51.066438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:38.975 [2024-12-06 06:58:51.066448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:38.975 [2024-12-06 06:58:51.066457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:38.975 [2024-12-06 06:58:51.066510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:38.975 [2024-12-06 06:58:51.066519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:38.975 [2024-12-06 06:58:51.066526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:38.975 [2024-12-06 06:58:51.066534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:38.975 [2024-12-06 06:58:51.066597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:38.975 [2024-12-06 06:58:51.066606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:38.975 [2024-12-06 06:58:51.066618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:38.975 [2024-12-06 06:58:51.066645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:38.975 [2024-12-06 06:58:51.066661] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:38.975 [2024-12-06 06:58:51.066669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:38.975 [2024-12-06 06:58:51.066676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:38.975 [2024-12-06 06:58:51.066683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:38.975 [2024-12-06 06:58:51.144493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:38.975 [2024-12-06 06:58:51.144542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:38.975 [2024-12-06 06:58:51.144559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:38.975 [2024-12-06 06:58:51.144568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:38.975 [2024-12-06 06:58:51.208016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:38.975 [2024-12-06 06:58:51.208058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:38.975 [2024-12-06 06:58:51.208070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:38.975 [2024-12-06 06:58:51.208079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:38.975 [2024-12-06 06:58:51.208156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:38.975 [2024-12-06 06:58:51.208166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:38.975 [2024-12-06 06:58:51.208175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:38.975 [2024-12-06 06:58:51.208188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:38.975 [2024-12-06 06:58:51.208228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:38.975 [2024-12-06 06:58:51.208237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:38.975 [2024-12-06 06:58:51.208245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:38.975 [2024-12-06 06:58:51.208252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:38.975 [2024-12-06 06:58:51.208335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:38.975 [2024-12-06 06:58:51.208345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:38.975 [2024-12-06 06:58:51.208353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:38.975 [2024-12-06 06:58:51.208360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:38.975 [2024-12-06 06:58:51.208393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:38.975 [2024-12-06 06:58:51.208402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:32:38.975 [2024-12-06 06:58:51.208410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:38.975 [2024-12-06 06:58:51.208417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:38.975 [2024-12-06 06:58:51.208451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:38.976 [2024-12-06 06:58:51.208460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:38.976 [2024-12-06 06:58:51.208490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:38.976 [2024-12-06 06:58:51.208497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:38.976 
[2024-12-06 06:58:51.208541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:38.976 [2024-12-06 06:58:51.208551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:38.976 [2024-12-06 06:58:51.208558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:38.976 [2024-12-06 06:58:51.208565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:38.976 [2024-12-06 06:58:51.208678] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 9542.769 ms, result 0 00:32:45.529 06:58:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:32:45.529 06:58:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:32:45.529 06:58:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:32:45.529 06:58:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:32:45.529 06:58:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:45.529 06:58:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81402 00:32:45.529 06:58:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:32:45.529 06:58:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81402 00:32:45.529 06:58:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81402 ']' 00:32:45.529 06:58:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:45.529 06:58:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:45.529 06:58:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:45.529 06:58:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:45.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:45.529 06:58:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:45.529 06:58:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:45.529 [2024-12-06 06:58:58.146094] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
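With 'FTL shutdown' finished (9542.769 ms, result 0) and the old target (pid 80863) gone, tcp_target_setup relaunches spdk_tgt, now pid 81402, from the saved /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json so the same FTL device can be brought back up. waitforlisten blocks until the new target's RPC socket answers; a rough equivalent of that wait, as a sketch (rpc_get_methods is a standard SPDK RPC; the polling loop is illustrative, not the harness code):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Poll the freshly started target until its RPC socket is serving requests
  until $RPC -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done

The startup trace that follows shows the new target restoring FTL metadata (NV cache state, valid map, band info, trim, P2L checkpoints, L2P) and re-arming the dirty state before the NVMe/TCP listener comes up on 127.0.0.1 port 4420.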
00:32:45.529 [2024-12-06 06:58:58.146221] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81402 ] 00:32:45.786 [2024-12-06 06:58:58.307026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:45.786 [2024-12-06 06:58:58.419503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:46.729 [2024-12-06 06:58:59.200349] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:32:46.729 [2024-12-06 06:58:59.200481] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:32:46.729 [2024-12-06 06:58:59.354958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.729 [2024-12-06 06:58:59.355020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:32:46.729 [2024-12-06 06:58:59.355038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:32:46.729 [2024-12-06 06:58:59.355048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.729 [2024-12-06 06:58:59.355121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.729 [2024-12-06 06:58:59.355132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:46.729 [2024-12-06 06:58:59.355142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:32:46.729 [2024-12-06 06:58:59.355151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.729 [2024-12-06 06:58:59.355180] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:32:46.729 [2024-12-06 06:58:59.356166] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:32:46.729 [2024-12-06 06:58:59.356222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.729 [2024-12-06 06:58:59.356233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:46.729 [2024-12-06 06:58:59.356244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.052 ms 00:32:46.729 [2024-12-06 06:58:59.356252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.729 [2024-12-06 06:58:59.358567] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:32:46.729 [2024-12-06 06:58:59.373721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.729 [2024-12-06 06:58:59.373944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:32:46.729 [2024-12-06 06:58:59.373976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.156 ms 00:32:46.729 [2024-12-06 06:58:59.373985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.729 [2024-12-06 06:58:59.374146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.729 [2024-12-06 06:58:59.374164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:32:46.729 [2024-12-06 06:58:59.374174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:32:46.729 [2024-12-06 06:58:59.374182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.729 [2024-12-06 06:58:59.385797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.729 [2024-12-06 
06:58:59.385840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:46.729 [2024-12-06 06:58:59.385852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.519 ms 00:32:46.729 [2024-12-06 06:58:59.385861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.729 [2024-12-06 06:58:59.385937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.729 [2024-12-06 06:58:59.385947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:46.729 [2024-12-06 06:58:59.385957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:32:46.729 [2024-12-06 06:58:59.385965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.729 [2024-12-06 06:58:59.386028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.729 [2024-12-06 06:58:59.386045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:32:46.729 [2024-12-06 06:58:59.386055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:32:46.729 [2024-12-06 06:58:59.386063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.729 [2024-12-06 06:58:59.386091] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:32:46.729 [2024-12-06 06:58:59.390775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.729 [2024-12-06 06:58:59.390817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:46.729 [2024-12-06 06:58:59.390828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.691 ms 00:32:46.729 [2024-12-06 06:58:59.390841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.729 [2024-12-06 06:58:59.390879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.729 [2024-12-06 06:58:59.390890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:32:46.729 [2024-12-06 06:58:59.390898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:32:46.729 [2024-12-06 06:58:59.390907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.729 [2024-12-06 06:58:59.390950] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:32:46.729 [2024-12-06 06:58:59.390983] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:32:46.729 [2024-12-06 06:58:59.391023] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:32:46.729 [2024-12-06 06:58:59.391042] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:32:46.729 [2024-12-06 06:58:59.391154] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:32:46.729 [2024-12-06 06:58:59.391167] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:32:46.729 [2024-12-06 06:58:59.391179] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:32:46.729 [2024-12-06 06:58:59.391189] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:32:46.729 [2024-12-06 06:58:59.391200] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:32:46.729 [2024-12-06 06:58:59.391212] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:32:46.729 [2024-12-06 06:58:59.391220] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:32:46.729 [2024-12-06 06:58:59.391229] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:32:46.729 [2024-12-06 06:58:59.391239] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:32:46.729 [2024-12-06 06:58:59.391249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.729 [2024-12-06 06:58:59.391257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:32:46.729 [2024-12-06 06:58:59.391265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.304 ms 00:32:46.729 [2024-12-06 06:58:59.391273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.729 [2024-12-06 06:58:59.391359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.729 [2024-12-06 06:58:59.391369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:32:46.729 [2024-12-06 06:58:59.391381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.070 ms 00:32:46.729 [2024-12-06 06:58:59.391389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.729 [2024-12-06 06:58:59.391559] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:32:46.729 [2024-12-06 06:58:59.391575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:32:46.729 [2024-12-06 06:58:59.391584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:46.729 [2024-12-06 06:58:59.391592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:46.729 [2024-12-06 06:58:59.391603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:32:46.729 [2024-12-06 06:58:59.391611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:32:46.729 [2024-12-06 06:58:59.391618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:32:46.729 [2024-12-06 06:58:59.391626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:32:46.729 [2024-12-06 06:58:59.391636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:32:46.729 [2024-12-06 06:58:59.391643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:46.729 [2024-12-06 06:58:59.391651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:32:46.729 [2024-12-06 06:58:59.391662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:32:46.729 [2024-12-06 06:58:59.391670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:46.729 [2024-12-06 06:58:59.391679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:32:46.729 [2024-12-06 06:58:59.391687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:32:46.729 [2024-12-06 06:58:59.391694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:46.729 [2024-12-06 06:58:59.391702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:32:46.729 [2024-12-06 06:58:59.391709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:32:46.729 [2024-12-06 06:58:59.391728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:46.729 [2024-12-06 06:58:59.391736] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:32:46.729 [2024-12-06 06:58:59.391744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:32:46.729 [2024-12-06 06:58:59.391751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:46.729 [2024-12-06 06:58:59.391758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:32:46.729 [2024-12-06 06:58:59.391773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:32:46.729 [2024-12-06 06:58:59.391780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:46.729 [2024-12-06 06:58:59.391787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:32:46.729 [2024-12-06 06:58:59.391795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:32:46.729 [2024-12-06 06:58:59.391802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:46.729 [2024-12-06 06:58:59.391810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:32:46.729 [2024-12-06 06:58:59.391817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:32:46.729 [2024-12-06 06:58:59.391824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:46.729 [2024-12-06 06:58:59.391830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:32:46.729 [2024-12-06 06:58:59.391837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:32:46.729 [2024-12-06 06:58:59.391844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:46.729 [2024-12-06 06:58:59.391851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:32:46.729 [2024-12-06 06:58:59.391857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:32:46.729 [2024-12-06 06:58:59.391865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:46.729 [2024-12-06 06:58:59.391872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:32:46.729 [2024-12-06 06:58:59.391879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:32:46.729 [2024-12-06 06:58:59.391886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:46.729 [2024-12-06 06:58:59.391892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:32:46.729 [2024-12-06 06:58:59.391899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:32:46.729 [2024-12-06 06:58:59.391906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:46.729 [2024-12-06 06:58:59.391916] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:32:46.729 [2024-12-06 06:58:59.391925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:32:46.729 [2024-12-06 06:58:59.391933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:46.729 [2024-12-06 06:58:59.391941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:46.729 [2024-12-06 06:58:59.391954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:32:46.729 [2024-12-06 06:58:59.391961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:32:46.729 [2024-12-06 06:58:59.391968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:32:46.729 [2024-12-06 06:58:59.391975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:32:46.729 [2024-12-06 06:58:59.391982] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:32:46.729 [2024-12-06 06:58:59.391989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:32:46.729 [2024-12-06 06:58:59.391998] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:32:46.729 [2024-12-06 06:58:59.392009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:46.729 [2024-12-06 06:58:59.392017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:32:46.729 [2024-12-06 06:58:59.392025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:32:46.729 [2024-12-06 06:58:59.392033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:32:46.729 [2024-12-06 06:58:59.392040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:32:46.729 [2024-12-06 06:58:59.392047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:32:46.729 [2024-12-06 06:58:59.392054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:32:46.729 [2024-12-06 06:58:59.392063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:32:46.729 [2024-12-06 06:58:59.392074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:32:46.729 [2024-12-06 06:58:59.392082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:32:46.729 [2024-12-06 06:58:59.392089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:32:46.729 [2024-12-06 06:58:59.392098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:32:46.730 [2024-12-06 06:58:59.392107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:32:46.730 [2024-12-06 06:58:59.392114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:32:46.730 [2024-12-06 06:58:59.392123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:32:46.730 [2024-12-06 06:58:59.392130] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:32:46.730 [2024-12-06 06:58:59.392139] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:46.730 [2024-12-06 06:58:59.392147] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:46.730 [2024-12-06 06:58:59.392154] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:32:46.730 [2024-12-06 06:58:59.392164] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:32:46.730 [2024-12-06 06:58:59.392172] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:32:46.730 [2024-12-06 06:58:59.392182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.730 [2024-12-06 06:58:59.392191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:32:46.730 [2024-12-06 06:58:59.392199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.729 ms 00:32:46.730 [2024-12-06 06:58:59.392207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.730 [2024-12-06 06:58:59.392252] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:32:46.730 [2024-12-06 06:58:59.392265] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:32:50.920 [2024-12-06 06:59:03.149397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:50.920 [2024-12-06 06:59:03.149518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:32:50.920 [2024-12-06 06:59:03.149540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3757.126 ms 00:32:50.920 [2024-12-06 06:59:03.149550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:50.920 [2024-12-06 06:59:03.186808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:50.920 [2024-12-06 06:59:03.187069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:50.920 [2024-12-06 06:59:03.187093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.989 ms 00:32:50.920 [2024-12-06 06:59:03.187104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:50.920 [2024-12-06 06:59:03.187244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:50.920 [2024-12-06 06:59:03.187263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:32:50.920 [2024-12-06 06:59:03.187275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:32:50.920 [2024-12-06 06:59:03.187285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:50.920 [2024-12-06 06:59:03.227023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:50.920 [2024-12-06 06:59:03.227076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:50.920 [2024-12-06 06:59:03.227093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.690 ms 00:32:50.920 [2024-12-06 06:59:03.227102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:50.920 [2024-12-06 06:59:03.227160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:50.920 [2024-12-06 06:59:03.227171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:50.920 [2024-12-06 06:59:03.227180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:50.920 [2024-12-06 06:59:03.227189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:50.920 [2024-12-06 06:59:03.227966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:50.920 [2024-12-06 06:59:03.227995] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:50.920 [2024-12-06 06:59:03.228009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.717 ms 00:32:50.920 [2024-12-06 06:59:03.228019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:50.920 [2024-12-06 06:59:03.228087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:50.920 [2024-12-06 06:59:03.228109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:50.920 [2024-12-06 06:59:03.228120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:32:50.920 [2024-12-06 06:59:03.228131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:50.920 [2024-12-06 06:59:03.248640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:50.920 [2024-12-06 06:59:03.248687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:50.920 [2024-12-06 06:59:03.248700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.481 ms 00:32:50.920 [2024-12-06 06:59:03.248710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:50.920 [2024-12-06 06:59:03.279745] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:32:50.920 [2024-12-06 06:59:03.279813] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:32:50.920 [2024-12-06 06:59:03.279835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:50.920 [2024-12-06 06:59:03.279848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:32:50.920 [2024-12-06 06:59:03.279863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.993 ms 00:32:50.920 [2024-12-06 06:59:03.279873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:50.920 [2024-12-06 06:59:03.294535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:50.920 [2024-12-06 06:59:03.294586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:32:50.920 [2024-12-06 06:59:03.294599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.569 ms 00:32:50.920 [2024-12-06 06:59:03.294608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:50.920 [2024-12-06 06:59:03.306934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:50.920 [2024-12-06 06:59:03.306999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:32:50.920 [2024-12-06 06:59:03.307011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.268 ms 00:32:50.920 [2024-12-06 06:59:03.307019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:50.920 [2024-12-06 06:59:03.319364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:50.920 [2024-12-06 06:59:03.319406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:32:50.920 [2024-12-06 06:59:03.319418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.294 ms 00:32:50.920 [2024-12-06 06:59:03.319435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:50.920 [2024-12-06 06:59:03.320132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:50.920 [2024-12-06 06:59:03.320169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:32:50.920 [2024-12-06 
06:59:03.320180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.558 ms 00:32:50.920 [2024-12-06 06:59:03.320190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:50.920 [2024-12-06 06:59:03.391981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:50.920 [2024-12-06 06:59:03.392052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:32:50.920 [2024-12-06 06:59:03.392073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 71.765 ms 00:32:50.920 [2024-12-06 06:59:03.392083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:50.920 [2024-12-06 06:59:03.404253] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:32:50.920 [2024-12-06 06:59:03.405517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:50.920 [2024-12-06 06:59:03.405709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:32:50.920 [2024-12-06 06:59:03.405732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.365 ms 00:32:50.920 [2024-12-06 06:59:03.405743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:50.920 [2024-12-06 06:59:03.405878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:50.920 [2024-12-06 06:59:03.405893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:32:50.920 [2024-12-06 06:59:03.405907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:32:50.920 [2024-12-06 06:59:03.405917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:50.920 [2024-12-06 06:59:03.405993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:50.920 [2024-12-06 06:59:03.406005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:32:50.920 [2024-12-06 06:59:03.406016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:32:50.920 [2024-12-06 06:59:03.406026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:50.920 [2024-12-06 06:59:03.406059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:50.920 [2024-12-06 06:59:03.406069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:32:50.920 [2024-12-06 06:59:03.406083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:32:50.920 [2024-12-06 06:59:03.406092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:50.920 [2024-12-06 06:59:03.406136] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:32:50.920 [2024-12-06 06:59:03.406148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:50.920 [2024-12-06 06:59:03.406159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:32:50.920 [2024-12-06 06:59:03.406168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:32:50.920 [2024-12-06 06:59:03.406177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:50.920 [2024-12-06 06:59:03.431515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:50.920 [2024-12-06 06:59:03.431573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:32:50.920 [2024-12-06 06:59:03.431588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.311 ms 00:32:50.920 [2024-12-06 06:59:03.431597] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:32:50.920 [2024-12-06 06:59:03.431692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:50.920 [2024-12-06 06:59:03.431704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization
00:32:50.920 [2024-12-06 06:59:03.431715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms
00:32:50.920 [2024-12-06 06:59:03.431724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:32:50.920 [2024-12-06 06:59:03.433235] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4077.682 ms, result 0
00:32:50.920 [2024-12-06 06:59:03.447970] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:32:50.920 [2024-12-06 06:59:03.463973] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000
00:32:50.920 [2024-12-06 06:59:03.472188] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:32:51.492 06:59:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:32:51.492 06:59:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0
00:32:51.492 06:59:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:32:51.492 06:59:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0
00:32:51.492 06:59:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
00:32:51.751 [2024-12-06 06:59:04.276699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:51.751 [2024-12-06 06:59:04.276755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property
00:32:51.751 [2024-12-06 06:59:04.276775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms
00:32:51.751 [2024-12-06 06:59:04.276785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:32:51.751 [2024-12-06 06:59:04.276809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:51.751 [2024-12-06 06:59:04.276819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property
00:32:51.751 [2024-12-06 06:59:04.276828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms
00:32:51.751 [2024-12-06 06:59:04.276836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:32:51.751 [2024-12-06 06:59:04.276857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:51.751 [2024-12-06 06:59:04.276866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup
00:32:51.751 [2024-12-06 06:59:04.276873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms
00:32:51.751 [2024-12-06 06:59:04.276882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:32:51.751 [2024-12-06 06:59:04.276946] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.239 ms, result 0
00:32:51.751 true
00:32:51.751 06:59:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:32:52.013 {
00:32:52.013   "name": "ftl",
00:32:52.013   "properties": [
00:32:52.013     {
00:32:52.013       "name": "superblock_version",
00:32:52.013       "value": 5,
00:32:52.013       "read-only": true
00:32:52.013     },
00:32:52.013     {
00:32:52.013       "name": "base_device",
00:32:52.013       "bands": [
00:32:52.013         {
00:32:52.013           "id": 0,
00:32:52.013           "state": "CLOSED",
00:32:52.013           "validity": 1.0
00:32:52.013         },
00:32:52.013         {
00:32:52.013           "id": 1,
00:32:52.013           "state": "CLOSED",
00:32:52.013           "validity": 1.0
00:32:52.013         },
00:32:52.013         {
00:32:52.013           "id": 2,
00:32:52.013           "state": "CLOSED",
00:32:52.013           "validity": 0.007843137254901933
00:32:52.013         },
00:32:52.013         {
00:32:52.013           "id": 3,
00:32:52.013           "state": "FREE",
00:32:52.013           "validity": 0.0
00:32:52.013         },
00:32:52.013         {
00:32:52.013           "id": 4,
00:32:52.013           "state": "FREE",
00:32:52.013           "validity": 0.0
00:32:52.013         },
00:32:52.013         {
00:32:52.013           "id": 5,
00:32:52.013           "state": "FREE",
00:32:52.013           "validity": 0.0
00:32:52.013         },
00:32:52.013         {
00:32:52.013           "id": 6,
00:32:52.013           "state": "FREE",
00:32:52.013           "validity": 0.0
00:32:52.013         },
00:32:52.013         {
00:32:52.013           "id": 7,
00:32:52.013           "state": "FREE",
00:32:52.013           "validity": 0.0
00:32:52.013         },
00:32:52.013         {
00:32:52.013           "id": 8,
00:32:52.013           "state": "FREE",
00:32:52.013           "validity": 0.0
00:32:52.013         },
00:32:52.013         {
00:32:52.013           "id": 9,
00:32:52.013           "state": "FREE",
00:32:52.013           "validity": 0.0
00:32:52.013         },
00:32:52.013         {
00:32:52.013           "id": 10,
00:32:52.013           "state": "FREE",
00:32:52.013           "validity": 0.0
00:32:52.013         },
00:32:52.013         {
00:32:52.013           "id": 11,
00:32:52.013           "state": "FREE",
00:32:52.013           "validity": 0.0
00:32:52.013         },
00:32:52.013         {
00:32:52.013           "id": 12,
00:32:52.013           "state": "FREE",
00:32:52.013           "validity": 0.0
00:32:52.013         },
00:32:52.013         {
00:32:52.013           "id": 13,
00:32:52.013           "state": "FREE",
00:32:52.013           "validity": 0.0
00:32:52.013         },
00:32:52.013         {
00:32:52.013           "id": 14,
00:32:52.013           "state": "FREE",
00:32:52.013           "validity": 0.0
00:32:52.013         },
00:32:52.013         {
00:32:52.013           "id": 15,
00:32:52.013           "state": "FREE",
00:32:52.013           "validity": 0.0
00:32:52.013         },
00:32:52.013         {
00:32:52.013           "id": 16,
00:32:52.013           "state": "FREE",
00:32:52.013           "validity": 0.0
00:32:52.013         },
00:32:52.013         {
00:32:52.013           "id": 17,
00:32:52.013           "state": "FREE",
00:32:52.013           "validity": 0.0
00:32:52.013         }
00:32:52.013       ],
00:32:52.013       "read-only": true
00:32:52.013     },
00:32:52.013     {
00:32:52.013       "name": "cache_device",
00:32:52.013       "type": "bdev",
00:32:52.013       "chunks": [
00:32:52.013         {
00:32:52.013           "id": 0,
00:32:52.013           "state": "INACTIVE",
00:32:52.013           "utilization": 0.0
00:32:52.013         },
00:32:52.013         {
00:32:52.013           "id": 1,
00:32:52.013           "state": "OPEN",
00:32:52.013           "utilization": 0.0
00:32:52.013         },
00:32:52.013         {
00:32:52.013           "id": 2,
00:32:52.013           "state": "OPEN",
00:32:52.013           "utilization": 0.0
00:32:52.013         },
00:32:52.013         {
00:32:52.013           "id": 3,
00:32:52.013           "state": "FREE",
00:32:52.013           "utilization": 0.0
00:32:52.013         },
00:32:52.013         {
00:32:52.013           "id": 4,
00:32:52.013           "state": "FREE",
00:32:52.013           "utilization": 0.0
00:32:52.013         }
00:32:52.013       ],
00:32:52.013       "read-only": true
00:32:52.013     },
00:32:52.013     {
00:32:52.013       "name": "verbose_mode",
00:32:52.013       "value": true,
00:32:52.013       "unit": "",
00:32:52.013       "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
00:32:52.013     },
00:32:52.013     {
00:32:52.013       "name": "prep_upgrade_on_shutdown",
00:32:52.013       "value": false,
00:32:52.013       "unit": "",
00:32:52.013       "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
00:32:52.013     }
00:32:52.013   ]
00:32:52.013 }
00:32:52.013 06:59:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties
00:32:52.013 06:59:04
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:52.013 06:59:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:32:52.013 06:59:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:32:52.013 06:59:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:32:52.013 06:59:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:32:52.013 06:59:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:52.013 06:59:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:32:52.273 Validate MD5 checksum, iteration 1 00:32:52.273 06:59:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:32:52.273 06:59:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:32:52.273 06:59:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:32:52.273 06:59:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:32:52.273 06:59:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:32:52.273 06:59:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:52.273 06:59:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:32:52.273 06:59:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:52.274 06:59:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:52.274 06:59:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:52.274 06:59:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:52.274 06:59:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:52.274 06:59:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:52.536 [2024-12-06 06:59:05.035296] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
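The trace above is the gate before validation: ftl_get_properties output is fed through jq to confirm that no cache chunks are in use (used=0) and no bands are left OPENED (opened=0), after which test_validate_checksum reads the bdev back in 1024 MiB windows over NVMe/TCP and checks each window's MD5 sum. A minimal sketch of that loop, reconstructed from the xtrace output (tcp_dd is the test/ftl/common.sh wrapper around spdk_dd seen above; the $iterations count and the sums array recorded during the earlier write phase are assumptions for illustration, not the verbatim upgrade_shutdown.sh):

    # Illustrative reconstruction of the validation loop traced above.
    test_validate_checksum() {
        local skip=0 i sum
        for ((i = 0; i < iterations; i++)); do
            echo "Validate MD5 checksum, iteration $((i + 1))"
            # Read the next 1024 x 1 MiB blocks from ftln1 into a scratch file.
            tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 \
                   --qd=2 --skip=$skip
            ((skip += 1024))
            sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
            # Quoted right-hand side forces a literal (non-glob) comparison.
            [[ $sum == "${sums[i]}" ]] || return 1
        done
    }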
00:32:52.536 [2024-12-06 06:59:05.035756] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81495 ] 00:32:52.536 [2024-12-06 06:59:05.202613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:52.796 [2024-12-06 06:59:05.348971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:54.703  [2024-12-06T06:59:07.703Z] Copying: 598/1024 [MB] (598 MBps) [2024-12-06T06:59:09.133Z] Copying: 1024/1024 [MB] (average 593 MBps) 00:32:56.392 00:32:56.392 06:59:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:32:56.392 06:59:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:58.299 06:59:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:32:58.299 06:59:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=17e24ce6a152e80b3bca18c282613103 00:32:58.299 Validate MD5 checksum, iteration 2 00:32:58.299 06:59:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 17e24ce6a152e80b3bca18c282613103 != \1\7\e\2\4\c\e\6\a\1\5\2\e\8\0\b\3\b\c\a\1\8\c\2\8\2\6\1\3\1\0\3 ]] 00:32:58.299 06:59:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:32:58.299 06:59:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:58.299 06:59:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:32:58.299 06:59:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:58.299 06:59:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:58.299 06:59:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:58.299 06:59:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:58.299 06:59:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:58.299 06:59:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:58.556 [2024-12-06 06:59:11.049206] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
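The comparison traced above as [[ 17e24ce6a152e80b3bca18c282613103 != \1\7\e\2... ]] is not corruption: inside [[ ]] the right-hand side of != is a glob pattern, so the script quotes the expected sum to force a literal match, and bash xtrace renders a quoted pattern operand with every character backslash-escaped. A small example of the same idiom (variable names are illustrative):

    set -x
    sum=$(md5sum "$file" | cut -f1 -d' ')
    # A quoted RHS of != in [[ ]] is compared literally; set -x prints it
    # as \1\7\e\2... to show the characters are not glob metacharacters.
    if [[ $sum != "$expected_sum" ]]; then
        echo "MD5 mismatch: got $sum, expected $expected_sum" >&2
        exit 1
    fi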
00:32:58.556 [2024-12-06 06:59:11.049636] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81562 ] 00:32:58.556 [2024-12-06 06:59:11.210638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:58.813 [2024-12-06 06:59:11.317382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:00.189  [2024-12-06T06:59:13.498Z] Copying: 703/1024 [MB] (703 MBps) [2024-12-06T06:59:14.436Z] Copying: 1024/1024 [MB] (average 690 MBps) 00:33:01.695 00:33:01.695 06:59:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:33:01.695 06:59:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:03.593 06:59:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:03.593 06:59:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b89216b95c7052cc71dbfa836b834713 00:33:03.593 06:59:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b89216b95c7052cc71dbfa836b834713 != \b\8\9\2\1\6\b\9\5\c\7\0\5\2\c\c\7\1\d\b\f\a\8\3\6\b\8\3\4\7\1\3 ]] 00:33:03.593 06:59:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:03.593 06:59:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:03.593 06:59:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:33:03.593 06:59:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 81402 ]] 00:33:03.593 06:59:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 81402 00:33:03.593 06:59:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:33:03.594 06:59:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:33:03.594 06:59:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:33:03.594 06:59:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:33:03.594 06:59:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:03.594 06:59:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81622 00:33:03.851 06:59:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:33:03.851 06:59:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:03.851 06:59:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81622 00:33:03.851 06:59:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81622 ']' 00:33:03.851 06:59:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:03.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:03.851 06:59:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:03.851 06:59:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
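With both windows verified, the trace above enters the dirty-shutdown phase: tcp_target_shutdown_dirty SIGKILLs the target (pid 81402) so FTL never persists a clean shutdown state, then tcp_target_setup relaunches spdk_tgt (pid 81622) from the tgt.json captured earlier and waits on the RPC socket. Sketched from the helper names visible in the trace (an illustrative reconstruction, not the verbatim ftl/common.sh):

    # SIGKILL leaves the FTL superblock dirty, so the next startup must take
    # the recovery path ('Recover band state', 'Recover open chunk', ...).
    tcp_target_shutdown_dirty() {
        [[ -n $spdk_tgt_pid ]] && kill -9 $spdk_tgt_pid
        unset spdk_tgt_pid
    }

    tcp_target_shutdown_dirty
    # Restart from the JSON config saved before the kill and wait for RPC.
    $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" &
    spdk_tgt_pid=$!
    waitforlisten $spdk_tgt_pid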
00:33:03.851 06:59:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:03.851 06:59:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:03.851 [2024-12-06 06:59:16.400990] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:33:03.851 [2024-12-06 06:59:16.401088] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81622 ] 00:33:03.851 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 81402 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:33:03.851 [2024-12-06 06:59:16.553088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:04.108 [2024-12-06 06:59:16.648708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:04.674 [2024-12-06 06:59:17.310235] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:04.674 [2024-12-06 06:59:17.310525] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:04.935 [2024-12-06 06:59:17.458566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.935 [2024-12-06 06:59:17.458605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:33:04.935 [2024-12-06 06:59:17.458617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:33:04.935 [2024-12-06 06:59:17.458624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.935 [2024-12-06 06:59:17.458672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.935 [2024-12-06 06:59:17.458681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:04.935 [2024-12-06 06:59:17.458688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:33:04.935 [2024-12-06 06:59:17.458694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.935 [2024-12-06 06:59:17.458713] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:33:04.935 [2024-12-06 06:59:17.459270] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:33:04.935 [2024-12-06 06:59:17.459285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.935 [2024-12-06 06:59:17.459293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:04.935 [2024-12-06 06:59:17.459300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.580 ms 00:33:04.935 [2024-12-06 06:59:17.459306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.935 [2024-12-06 06:59:17.459614] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:33:04.935 [2024-12-06 06:59:17.473341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.935 [2024-12-06 06:59:17.473373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:33:04.935 [2024-12-06 06:59:17.473384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.728 ms 00:33:04.935 [2024-12-06 06:59:17.473392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.935 [2024-12-06 06:59:17.480783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:33:04.935 [2024-12-06 06:59:17.480811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:33:04.935 [2024-12-06 06:59:17.480820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:33:04.935 [2024-12-06 06:59:17.480826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.935 [2024-12-06 06:59:17.481089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.935 [2024-12-06 06:59:17.481100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:04.935 [2024-12-06 06:59:17.481107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.201 ms 00:33:04.935 [2024-12-06 06:59:17.481113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.936 [2024-12-06 06:59:17.481157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.936 [2024-12-06 06:59:17.481165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:04.936 [2024-12-06 06:59:17.481172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:33:04.936 [2024-12-06 06:59:17.481178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.936 [2024-12-06 06:59:17.481197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.936 [2024-12-06 06:59:17.481205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:33:04.936 [2024-12-06 06:59:17.481211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:33:04.936 [2024-12-06 06:59:17.481217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.936 [2024-12-06 06:59:17.481234] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:33:04.936 [2024-12-06 06:59:17.483898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.936 [2024-12-06 06:59:17.483925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:04.936 [2024-12-06 06:59:17.483933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.668 ms 00:33:04.936 [2024-12-06 06:59:17.483939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.936 [2024-12-06 06:59:17.483968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.936 [2024-12-06 06:59:17.483976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:33:04.936 [2024-12-06 06:59:17.483983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:33:04.936 [2024-12-06 06:59:17.483989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.936 [2024-12-06 06:59:17.484004] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:33:04.936 [2024-12-06 06:59:17.484021] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:33:04.936 [2024-12-06 06:59:17.484049] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:33:04.936 [2024-12-06 06:59:17.484063] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:33:04.936 [2024-12-06 06:59:17.484148] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:33:04.936 [2024-12-06 06:59:17.484158] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:33:04.936 [2024-12-06 06:59:17.484166] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:33:04.936 [2024-12-06 06:59:17.484174] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:33:04.936 [2024-12-06 06:59:17.484181] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:33:04.936 [2024-12-06 06:59:17.484188] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:33:04.936 [2024-12-06 06:59:17.484195] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:33:04.936 [2024-12-06 06:59:17.484202] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:33:04.936 [2024-12-06 06:59:17.484208] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:33:04.936 [2024-12-06 06:59:17.484216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.936 [2024-12-06 06:59:17.484223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:33:04.936 [2024-12-06 06:59:17.484232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.214 ms 00:33:04.936 [2024-12-06 06:59:17.484238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.936 [2024-12-06 06:59:17.484305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.936 [2024-12-06 06:59:17.484312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:33:04.936 [2024-12-06 06:59:17.484319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:33:04.936 [2024-12-06 06:59:17.484328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.936 [2024-12-06 06:59:17.484406] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:33:04.936 [2024-12-06 06:59:17.484416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:33:04.936 [2024-12-06 06:59:17.484427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:04.936 [2024-12-06 06:59:17.484436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:04.936 [2024-12-06 06:59:17.484443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:33:04.936 [2024-12-06 06:59:17.484449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:33:04.936 [2024-12-06 06:59:17.484455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:33:04.936 [2024-12-06 06:59:17.484477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:33:04.936 [2024-12-06 06:59:17.484485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:33:04.936 [2024-12-06 06:59:17.484491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:04.936 [2024-12-06 06:59:17.484498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:33:04.936 [2024-12-06 06:59:17.484504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:33:04.936 [2024-12-06 06:59:17.484510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:04.936 [2024-12-06 06:59:17.484516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:33:04.936 [2024-12-06 06:59:17.484521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:33:04.936 [2024-12-06 06:59:17.484526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:04.936 [2024-12-06 06:59:17.484532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:33:04.936 [2024-12-06 06:59:17.484537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:33:04.936 [2024-12-06 06:59:17.484542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:04.936 [2024-12-06 06:59:17.484548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:33:04.936 [2024-12-06 06:59:17.484554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:33:04.936 [2024-12-06 06:59:17.484564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:04.936 [2024-12-06 06:59:17.484570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:33:04.936 [2024-12-06 06:59:17.484575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:33:04.936 [2024-12-06 06:59:17.484580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:04.936 [2024-12-06 06:59:17.484585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:33:04.936 [2024-12-06 06:59:17.484591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:33:04.936 [2024-12-06 06:59:17.484596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:04.936 [2024-12-06 06:59:17.484602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:33:04.936 [2024-12-06 06:59:17.484607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:33:04.936 [2024-12-06 06:59:17.484612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:04.936 [2024-12-06 06:59:17.484617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:33:04.936 [2024-12-06 06:59:17.484622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:33:04.936 [2024-12-06 06:59:17.484628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:04.936 [2024-12-06 06:59:17.484633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:33:04.936 [2024-12-06 06:59:17.484638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:33:04.936 [2024-12-06 06:59:17.484651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:04.936 [2024-12-06 06:59:17.484657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:33:04.936 [2024-12-06 06:59:17.484662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:33:04.936 [2024-12-06 06:59:17.484666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:04.936 [2024-12-06 06:59:17.484671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:33:04.936 [2024-12-06 06:59:17.484676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:33:04.936 [2024-12-06 06:59:17.484683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:04.936 [2024-12-06 06:59:17.484688] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:33:04.936 [2024-12-06 06:59:17.484695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:33:04.936 [2024-12-06 06:59:17.484701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:04.936 [2024-12-06 06:59:17.484707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:33:04.936 [2024-12-06 06:59:17.484712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:33:04.936 [2024-12-06 06:59:17.484717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:33:04.936 [2024-12-06 06:59:17.484722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:33:04.936 [2024-12-06 06:59:17.484728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:33:04.936 [2024-12-06 06:59:17.484732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:33:04.936 [2024-12-06 06:59:17.484738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:33:04.936 [2024-12-06 06:59:17.484744] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:33:04.937 [2024-12-06 06:59:17.484751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:04.937 [2024-12-06 06:59:17.484758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:33:04.937 [2024-12-06 06:59:17.484764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:33:04.937 [2024-12-06 06:59:17.484769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:33:04.937 [2024-12-06 06:59:17.484775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:33:04.937 [2024-12-06 06:59:17.484780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:33:04.937 [2024-12-06 06:59:17.484785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:33:04.937 [2024-12-06 06:59:17.484790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:33:04.937 [2024-12-06 06:59:17.484795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:33:04.937 [2024-12-06 06:59:17.484801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:33:04.937 [2024-12-06 06:59:17.484806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:33:04.937 [2024-12-06 06:59:17.484812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:33:04.937 [2024-12-06 06:59:17.484817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:33:04.937 [2024-12-06 06:59:17.484822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:33:04.937 [2024-12-06 06:59:17.484828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:33:04.937 [2024-12-06 06:59:17.484833] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:33:04.937 [2024-12-06 06:59:17.484838] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:04.937 [2024-12-06 06:59:17.484846] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:04.937 [2024-12-06 06:59:17.484852] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:33:04.937 [2024-12-06 06:59:17.484857] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:33:04.937 [2024-12-06 06:59:17.484868] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:33:04.937 [2024-12-06 06:59:17.484873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.937 [2024-12-06 06:59:17.484879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:33:04.937 [2024-12-06 06:59:17.484886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.521 ms 00:33:04.937 [2024-12-06 06:59:17.484891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.937 [2024-12-06 06:59:17.507622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.937 [2024-12-06 06:59:17.507652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:04.937 [2024-12-06 06:59:17.507662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.675 ms 00:33:04.937 [2024-12-06 06:59:17.507669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.937 [2024-12-06 06:59:17.507705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.937 [2024-12-06 06:59:17.507713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:33:04.937 [2024-12-06 06:59:17.507719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:33:04.937 [2024-12-06 06:59:17.507725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.937 [2024-12-06 06:59:17.536266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.937 [2024-12-06 06:59:17.536295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:04.937 [2024-12-06 06:59:17.536305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.495 ms 00:33:04.937 [2024-12-06 06:59:17.536312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.937 [2024-12-06 06:59:17.536337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.937 [2024-12-06 06:59:17.536344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:04.937 [2024-12-06 06:59:17.536351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:33:04.937 [2024-12-06 06:59:17.536360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.937 [2024-12-06 06:59:17.536441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.937 [2024-12-06 06:59:17.536451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:04.937 [2024-12-06 06:59:17.536459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:33:04.937 [2024-12-06 06:59:17.536481] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:33:04.937 [2024-12-06 06:59:17.536517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.937 [2024-12-06 06:59:17.536525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:04.937 [2024-12-06 06:59:17.536531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:33:04.937 [2024-12-06 06:59:17.536538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.937 [2024-12-06 06:59:17.550013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.937 [2024-12-06 06:59:17.550038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:04.937 [2024-12-06 06:59:17.550046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.454 ms 00:33:04.937 [2024-12-06 06:59:17.550053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.937 [2024-12-06 06:59:17.550139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.937 [2024-12-06 06:59:17.550148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:33:04.937 [2024-12-06 06:59:17.550155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:33:04.937 [2024-12-06 06:59:17.550161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.937 [2024-12-06 06:59:17.575962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.937 [2024-12-06 06:59:17.575995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:33:04.937 [2024-12-06 06:59:17.576005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.785 ms 00:33:04.937 [2024-12-06 06:59:17.576012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.937 [2024-12-06 06:59:17.583487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.937 [2024-12-06 06:59:17.583519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:33:04.937 [2024-12-06 06:59:17.583540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.508 ms 00:33:04.937 [2024-12-06 06:59:17.583550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.937 [2024-12-06 06:59:17.632639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.937 [2024-12-06 06:59:17.632685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:33:04.937 [2024-12-06 06:59:17.632697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 49.029 ms 00:33:04.937 [2024-12-06 06:59:17.632704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.937 [2024-12-06 06:59:17.632849] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:33:04.937 [2024-12-06 06:59:17.632957] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:33:04.937 [2024-12-06 06:59:17.633065] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:33:04.937 [2024-12-06 06:59:17.633169] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:33:04.937 [2024-12-06 06:59:17.633179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.937 [2024-12-06 06:59:17.633186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:33:04.937 [2024-12-06 
06:59:17.633193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.438 ms 00:33:04.937 [2024-12-06 06:59:17.633200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.937 [2024-12-06 06:59:17.633246] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:33:04.937 [2024-12-06 06:59:17.633256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.937 [2024-12-06 06:59:17.633266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:33:04.937 [2024-12-06 06:59:17.633274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:33:04.937 [2024-12-06 06:59:17.633280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.937 [2024-12-06 06:59:17.645387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.937 [2024-12-06 06:59:17.645420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:33:04.937 [2024-12-06 06:59:17.645429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.086 ms 00:33:04.937 [2024-12-06 06:59:17.645436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.937 [2024-12-06 06:59:17.652328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.937 [2024-12-06 06:59:17.652361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:33:04.937 [2024-12-06 06:59:17.652373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:33:04.937 [2024-12-06 06:59:17.652383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.937 [2024-12-06 06:59:17.652489] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:33:04.937 [2024-12-06 06:59:17.652665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.937 [2024-12-06 06:59:17.652685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:33:04.937 [2024-12-06 06:59:17.652698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.177 ms 00:33:04.937 [2024-12-06 06:59:17.652708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:05.509 [2024-12-06 06:59:18.236624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:05.509 [2024-12-06 06:59:18.236724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:33:05.509 [2024-12-06 06:59:18.236743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 583.006 ms 00:33:05.509 [2024-12-06 06:59:18.236753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:05.509 [2024-12-06 06:59:18.241833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:05.509 [2024-12-06 06:59:18.241877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:33:05.509 [2024-12-06 06:59:18.241889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.787 ms 00:33:05.509 [2024-12-06 06:59:18.241900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:05.509 [2024-12-06 06:59:18.242864] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:33:05.509 [2024-12-06 06:59:18.242904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:05.509 [2024-12-06 06:59:18.242914] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:33:05.509 [2024-12-06 06:59:18.242925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.966 ms 00:33:05.509 [2024-12-06 06:59:18.242934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:05.509 [2024-12-06 06:59:18.242985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:05.509 [2024-12-06 06:59:18.242997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:33:05.509 [2024-12-06 06:59:18.243007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:33:05.509 [2024-12-06 06:59:18.243021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:05.509 [2024-12-06 06:59:18.243057] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 590.580 ms, result 0 00:33:05.509 [2024-12-06 06:59:18.243102] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:33:05.509 [2024-12-06 06:59:18.243252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:05.509 [2024-12-06 06:59:18.243265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:33:05.509 [2024-12-06 06:59:18.243274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.152 ms 00:33:05.509 [2024-12-06 06:59:18.243282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:06.448 [2024-12-06 06:59:18.946671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:06.448 [2024-12-06 06:59:18.947024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:33:06.448 [2024-12-06 06:59:18.947061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 702.215 ms 00:33:06.448 [2024-12-06 06:59:18.947071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:06.448 [2024-12-06 06:59:18.952683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:06.448 [2024-12-06 06:59:18.952720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:33:06.448 [2024-12-06 06:59:18.952732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.747 ms 00:33:06.448 [2024-12-06 06:59:18.952740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:06.448 [2024-12-06 06:59:18.953748] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:33:06.448 [2024-12-06 06:59:18.953819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:06.448 [2024-12-06 06:59:18.953829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:33:06.448 [2024-12-06 06:59:18.953839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.051 ms 00:33:06.448 [2024-12-06 06:59:18.953846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:06.448 [2024-12-06 06:59:18.953880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:06.448 [2024-12-06 06:59:18.953894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:33:06.448 [2024-12-06 06:59:18.953903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:06.448 [2024-12-06 06:59:18.953911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:06.448 [2024-12-06 
06:59:18.953951] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 710.842 ms, result 0 00:33:06.448 [2024-12-06 06:59:18.953997] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:06.448 [2024-12-06 06:59:18.954009] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:33:06.448 [2024-12-06 06:59:18.954019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:06.448 [2024-12-06 06:59:18.954027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:33:06.448 [2024-12-06 06:59:18.954036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1301.564 ms 00:33:06.448 [2024-12-06 06:59:18.954049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:06.448 [2024-12-06 06:59:18.954078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:06.448 [2024-12-06 06:59:18.954091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:33:06.448 [2024-12-06 06:59:18.954104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:33:06.448 [2024-12-06 06:59:18.954112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:06.448 [2024-12-06 06:59:18.966625] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:33:06.448 [2024-12-06 06:59:18.966734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:06.448 [2024-12-06 06:59:18.966745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:33:06.448 [2024-12-06 06:59:18.966756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.606 ms 00:33:06.448 [2024-12-06 06:59:18.966765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:06.448 [2024-12-06 06:59:18.967504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:06.448 [2024-12-06 06:59:18.967528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:33:06.448 [2024-12-06 06:59:18.967542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.644 ms 00:33:06.448 [2024-12-06 06:59:18.967550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:06.448 [2024-12-06 06:59:18.970128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:06.448 [2024-12-06 06:59:18.970154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:33:06.448 [2024-12-06 06:59:18.970165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.560 ms 00:33:06.448 [2024-12-06 06:59:18.970173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:06.448 [2024-12-06 06:59:18.970215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:06.448 [2024-12-06 06:59:18.970224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:33:06.448 [2024-12-06 06:59:18.970233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:33:06.448 [2024-12-06 06:59:18.970245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:06.448 [2024-12-06 06:59:18.970360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:06.448 [2024-12-06 06:59:18.970371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:33:06.448 
[2024-12-06 06:59:18.970380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:33:06.448 [2024-12-06 06:59:18.970388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:06.448 [2024-12-06 06:59:18.970411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:06.448 [2024-12-06 06:59:18.970419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:33:06.448 [2024-12-06 06:59:18.970428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:33:06.448 [2024-12-06 06:59:18.970435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:06.448 [2024-12-06 06:59:18.970490] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:33:06.448 [2024-12-06 06:59:18.970502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:06.448 [2024-12-06 06:59:18.970510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:33:06.448 [2024-12-06 06:59:18.970518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:33:06.448 [2024-12-06 06:59:18.970526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:06.448 [2024-12-06 06:59:18.970575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:06.448 [2024-12-06 06:59:18.970685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:33:06.448 [2024-12-06 06:59:18.970695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:33:06.448 [2024-12-06 06:59:18.970703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:06.448 [2024-12-06 06:59:18.972052] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1513.018 ms, result 0 00:33:06.448 [2024-12-06 06:59:18.987497] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:06.448 [2024-12-06 06:59:19.003486] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:33:06.448 [2024-12-06 06:59:19.012084] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:06.448 Validate MD5 checksum, iteration 1 00:33:06.448 06:59:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:06.448 06:59:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:33:06.448 06:59:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:06.448 06:59:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:33:06.448 06:59:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:33:06.448 06:59:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:33:06.448 06:59:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:33:06.448 06:59:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:06.448 06:59:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:33:06.448 06:59:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:06.448 06:59:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:06.448 06:59:19 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:06.448 06:59:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:06.448 06:59:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:06.448 06:59:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:06.448 [2024-12-06 06:59:19.117318] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 00:33:06.448 [2024-12-06 06:59:19.117812] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81658 ] 00:33:06.708 [2024-12-06 06:59:19.282298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:06.708 [2024-12-06 06:59:19.434478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:08.614  [2024-12-06T06:59:21.920Z] Copying: 572/1024 [MB] (572 MBps) [2024-12-06T06:59:22.853Z] Copying: 1024/1024 [MB] (average 608 MBps) 00:33:10.112 00:33:10.112 06:59:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:33:10.112 06:59:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:12.007 06:59:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:12.007 Validate MD5 checksum, iteration 2 00:33:12.007 06:59:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=17e24ce6a152e80b3bca18c282613103 00:33:12.007 06:59:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 17e24ce6a152e80b3bca18c282613103 != \1\7\e\2\4\c\e\6\a\1\5\2\e\8\0\b\3\b\c\a\1\8\c\2\8\2\6\1\3\1\0\3 ]] 00:33:12.007 06:59:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:12.007 06:59:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:12.008 06:59:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:33:12.008 06:59:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:12.008 06:59:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:12.008 06:59:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:12.008 06:59:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:12.008 06:59:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:12.008 06:59:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:12.265 [2024-12-06 06:59:24.781127] Starting SPDK v25.01-pre git sha1 
0b1b15acc / DPDK 24.03.0 initialization... 00:33:12.265 [2024-12-06 06:59:24.781226] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81719 ] 00:33:12.265 [2024-12-06 06:59:24.938435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:12.522 [2024-12-06 06:59:25.049667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:13.897  [2024-12-06T06:59:27.201Z] Copying: 625/1024 [MB] (625 MBps) [2024-12-06T06:59:28.134Z] Copying: 1024/1024 [MB] (average 676 MBps) 00:33:15.393 00:33:15.393 06:59:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:33:15.393 06:59:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:17.950 06:59:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:17.950 06:59:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b89216b95c7052cc71dbfa836b834713 00:33:17.950 06:59:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b89216b95c7052cc71dbfa836b834713 != \b\8\9\2\1\6\b\9\5\c\7\0\5\2\c\c\7\1\d\b\f\a\8\3\6\b\8\3\4\7\1\3 ]] 00:33:17.950 06:59:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:17.950 06:59:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:17.950 06:59:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:33:17.950 06:59:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:33:17.950 06:59:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:33:17.950 06:59:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:17.950 06:59:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:33:17.950 06:59:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:33:17.950 06:59:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:33:17.950 06:59:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:33:17.950 06:59:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 81622 ]] 00:33:17.950 06:59:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 81622 00:33:17.950 06:59:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81622 ']' 00:33:17.950 06:59:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 81622 00:33:17.950 06:59:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:33:17.950 06:59:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:17.950 06:59:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81622 00:33:17.950 killing process with pid 81622 00:33:17.950 06:59:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:17.950 06:59:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:17.950 06:59:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81622' 00:33:17.950 06:59:30 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 81622 00:33:17.950 06:59:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 81622 00:33:18.210 [2024-12-06 06:59:30.855252] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:33:18.210 [2024-12-06 06:59:30.867824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.210 [2024-12-06 06:59:30.867860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:33:18.210 [2024-12-06 06:59:30.867872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:33:18.210 [2024-12-06 06:59:30.867879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.210 [2024-12-06 06:59:30.867897] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:33:18.210 [2024-12-06 06:59:30.870107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.210 [2024-12-06 06:59:30.870136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:33:18.210 [2024-12-06 06:59:30.870146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.198 ms 00:33:18.210 [2024-12-06 06:59:30.870152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.210 [2024-12-06 06:59:30.870347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.210 [2024-12-06 06:59:30.870357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:33:18.210 [2024-12-06 06:59:30.870364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.176 ms 00:33:18.210 [2024-12-06 06:59:30.870371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.210 [2024-12-06 06:59:30.871562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.210 [2024-12-06 06:59:30.871585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:33:18.210 [2024-12-06 06:59:30.871593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.178 ms 00:33:18.211 [2024-12-06 06:59:30.871603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.211 [2024-12-06 06:59:30.872486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.211 [2024-12-06 06:59:30.872503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:33:18.211 [2024-12-06 06:59:30.872512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.857 ms 00:33:18.211 [2024-12-06 06:59:30.872519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.211 [2024-12-06 06:59:30.880916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.211 [2024-12-06 06:59:30.880943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:33:18.211 [2024-12-06 06:59:30.880956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.370 ms 00:33:18.211 [2024-12-06 06:59:30.880963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.211 [2024-12-06 06:59:30.885574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.211 [2024-12-06 06:59:30.885698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:33:18.211 [2024-12-06 06:59:30.885712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.581 ms 00:33:18.211 [2024-12-06 06:59:30.885719] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:33:18.211 [2024-12-06 06:59:30.885792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.211 [2024-12-06 06:59:30.885800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:33:18.211 [2024-12-06 06:59:30.885808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:33:18.211 [2024-12-06 06:59:30.885818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.211 [2024-12-06 06:59:30.893920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.211 [2024-12-06 06:59:30.893945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:33:18.211 [2024-12-06 06:59:30.893953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.089 ms 00:33:18.211 [2024-12-06 06:59:30.893959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.211 [2024-12-06 06:59:30.901945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.211 [2024-12-06 06:59:30.901969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:33:18.211 [2024-12-06 06:59:30.901976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.960 ms 00:33:18.211 [2024-12-06 06:59:30.901982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.211 [2024-12-06 06:59:30.909881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.211 [2024-12-06 06:59:30.909905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:33:18.211 [2024-12-06 06:59:30.909912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.873 ms 00:33:18.211 [2024-12-06 06:59:30.909918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.211 [2024-12-06 06:59:30.917483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.211 [2024-12-06 06:59:30.917507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:33:18.211 [2024-12-06 06:59:30.917514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.518 ms 00:33:18.211 [2024-12-06 06:59:30.917520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.211 [2024-12-06 06:59:30.917544] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:33:18.211 [2024-12-06 06:59:30.917557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:33:18.211 [2024-12-06 06:59:30.917565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:33:18.211 [2024-12-06 06:59:30.917572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:33:18.211 [2024-12-06 06:59:30.917578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:18.211 [2024-12-06 06:59:30.917585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:18.211 [2024-12-06 06:59:30.917591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:18.211 [2024-12-06 06:59:30.917598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:18.211 [2024-12-06 06:59:30.917603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:18.211 
[2024-12-06 06:59:30.917609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:18.211 [2024-12-06 06:59:30.917615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:18.211 [2024-12-06 06:59:30.917621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:18.211 [2024-12-06 06:59:30.917627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:18.211 [2024-12-06 06:59:30.917633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:18.211 [2024-12-06 06:59:30.917639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:18.211 [2024-12-06 06:59:30.917645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:18.211 [2024-12-06 06:59:30.917650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:18.211 [2024-12-06 06:59:30.917656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:18.211 [2024-12-06 06:59:30.917662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:18.211 [2024-12-06 06:59:30.917671] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:33:18.211 [2024-12-06 06:59:30.917677] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: c0a232f0-633a-42fa-842d-8eaa31e18778 00:33:18.211 [2024-12-06 06:59:30.917683] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:33:18.211 [2024-12-06 06:59:30.917689] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:33:18.211 [2024-12-06 06:59:30.917695] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:33:18.211 [2024-12-06 06:59:30.917702] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:33:18.211 [2024-12-06 06:59:30.917708] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:33:18.211 [2024-12-06 06:59:30.917714] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:33:18.211 [2024-12-06 06:59:30.917723] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:33:18.211 [2024-12-06 06:59:30.917728] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:33:18.211 [2024-12-06 06:59:30.917734] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:33:18.211 [2024-12-06 06:59:30.917742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.211 [2024-12-06 06:59:30.917749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:33:18.211 [2024-12-06 06:59:30.917755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.198 ms 00:33:18.211 [2024-12-06 06:59:30.917761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.211 [2024-12-06 06:59:30.927868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.211 [2024-12-06 06:59:30.927892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:33:18.211 [2024-12-06 06:59:30.927901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.084 ms 00:33:18.211 [2024-12-06 06:59:30.927908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:33:18.211 [2024-12-06 06:59:30.928206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.211 [2024-12-06 06:59:30.928213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:33:18.211 [2024-12-06 06:59:30.928220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.279 ms 00:33:18.211 [2024-12-06 06:59:30.928226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.470 [2024-12-06 06:59:30.963587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:18.470 [2024-12-06 06:59:30.963614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:18.470 [2024-12-06 06:59:30.963623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:18.470 [2024-12-06 06:59:30.963634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.470 [2024-12-06 06:59:30.963657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:18.470 [2024-12-06 06:59:30.963664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:18.470 [2024-12-06 06:59:30.963671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:18.470 [2024-12-06 06:59:30.963677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.470 [2024-12-06 06:59:30.963750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:18.470 [2024-12-06 06:59:30.963759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:18.470 [2024-12-06 06:59:30.963766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:18.470 [2024-12-06 06:59:30.963773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.470 [2024-12-06 06:59:30.963790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:18.470 [2024-12-06 06:59:30.963797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:18.470 [2024-12-06 06:59:30.963803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:18.470 [2024-12-06 06:59:30.963810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.470 [2024-12-06 06:59:31.027723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:18.470 [2024-12-06 06:59:31.027754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:18.470 [2024-12-06 06:59:31.027764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:18.470 [2024-12-06 06:59:31.027770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.470 [2024-12-06 06:59:31.079678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:18.470 [2024-12-06 06:59:31.079713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:18.470 [2024-12-06 06:59:31.079722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:18.470 [2024-12-06 06:59:31.079729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.471 [2024-12-06 06:59:31.079794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:18.471 [2024-12-06 06:59:31.079802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:18.471 [2024-12-06 06:59:31.079809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:18.471 [2024-12-06 06:59:31.079815] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.471 [2024-12-06 06:59:31.079864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:18.471 [2024-12-06 06:59:31.079883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:18.471 [2024-12-06 06:59:31.079890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:18.471 [2024-12-06 06:59:31.079896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.471 [2024-12-06 06:59:31.079980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:18.471 [2024-12-06 06:59:31.079988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:18.471 [2024-12-06 06:59:31.079996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:18.471 [2024-12-06 06:59:31.080002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.471 [2024-12-06 06:59:31.080030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:18.471 [2024-12-06 06:59:31.080037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:33:18.471 [2024-12-06 06:59:31.080046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:18.471 [2024-12-06 06:59:31.080052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.471 [2024-12-06 06:59:31.080085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:18.471 [2024-12-06 06:59:31.080093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:18.471 [2024-12-06 06:59:31.080099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:18.471 [2024-12-06 06:59:31.080105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.471 [2024-12-06 06:59:31.080145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:18.471 [2024-12-06 06:59:31.080155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:18.471 [2024-12-06 06:59:31.080162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:18.471 [2024-12-06 06:59:31.080169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.471 [2024-12-06 06:59:31.080277] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 212.424 ms, result 0 00:33:19.406 06:59:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:33:19.406 06:59:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:19.406 06:59:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:33:19.406 06:59:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:33:19.406 06:59:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:33:19.406 06:59:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:33:19.406 Remove shared memory files 00:33:19.406 06:59:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:33:19.406 06:59:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:33:19.406 06:59:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:33:19.406 06:59:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:33:19.406 06:59:31 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid81402 00:33:19.406 06:59:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:33:19.406 06:59:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:33:19.406 ************************************ 00:33:19.406 END TEST ftl_upgrade_shutdown 00:33:19.406 ************************************ 00:33:19.406 00:33:19.406 real 1m24.103s 00:33:19.406 user 1m55.567s 00:33:19.406 sys 0m19.614s 00:33:19.406 06:59:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:19.406 06:59:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:19.406 06:59:31 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:33:19.406 06:59:31 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:33:19.406 06:59:31 ftl -- ftl/ftl.sh@14 -- # killprocess 75398 00:33:19.406 06:59:31 ftl -- common/autotest_common.sh@954 -- # '[' -z 75398 ']' 00:33:19.406 Process with pid 75398 is not found 00:33:19.406 06:59:31 ftl -- common/autotest_common.sh@958 -- # kill -0 75398 00:33:19.406 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (75398) - No such process 00:33:19.406 06:59:31 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 75398 is not found' 00:33:19.406 06:59:31 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:33:19.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:19.406 06:59:31 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=81829 00:33:19.406 06:59:31 ftl -- ftl/ftl.sh@20 -- # waitforlisten 81829 00:33:19.406 06:59:31 ftl -- common/autotest_common.sh@835 -- # '[' -z 81829 ']' 00:33:19.406 06:59:31 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:19.406 06:59:31 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:19.406 06:59:31 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:19.406 06:59:31 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:19.406 06:59:31 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:19.406 06:59:31 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:19.406 [2024-12-06 06:59:31.984686] Starting SPDK v25.01-pre git sha1 0b1b15acc / DPDK 24.03.0 initialization... 
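For reference, the kill-and-verify helper traced twice in this run (pid 81622 above, and pid 75398 just before the spdk_tgt restart) reduces to the sketch below. It is reconstructed from the xtrace output rather than copied from autotest_common.sh; the sudo branch, which this run never takes, is elided, and error handling is simplified.

killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    # kill -0 delivers no signal; it only probes whether the pid exists.
    if ! kill -0 "$pid"; then
        # the pid-75398 path above: the target already exited
        echo "Process with pid $pid is not found"
        return 0
    fi
    if [[ $(uname) == Linux ]]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        # refuse to SIGTERM a sudo wrapper (trace: reactor_0 != sudo)
        [[ $process_name == sudo ]] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" # valid because the target was launched by this same shell
}

The trailing wait, visible in the trace as autotest_common.sh@978, both reaps the child and surfaces its exit status to the caller.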
00:33:19.407 [2024-12-06 06:59:31.984905] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81829 ] 00:33:19.407 [2024-12-06 06:59:32.137601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:19.666 [2024-12-06 06:59:32.230666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:20.232 06:59:32 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:20.232 06:59:32 ftl -- common/autotest_common.sh@868 -- # return 0 00:33:20.232 06:59:32 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:33:20.491 nvme0n1 00:33:20.491 06:59:33 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:33:20.491 06:59:33 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:33:20.491 06:59:33 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:20.749 06:59:33 ftl -- ftl/common.sh@28 -- # stores=6020fe85-6c68-4145-806e-d2d5cc645f20 00:33:20.749 06:59:33 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:33:20.749 06:59:33 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6020fe85-6c68-4145-806e-d2d5cc645f20 00:33:21.007 06:59:33 ftl -- ftl/ftl.sh@23 -- # killprocess 81829 00:33:21.007 06:59:33 ftl -- common/autotest_common.sh@954 -- # '[' -z 81829 ']' 00:33:21.007 06:59:33 ftl -- common/autotest_common.sh@958 -- # kill -0 81829 00:33:21.007 06:59:33 ftl -- common/autotest_common.sh@959 -- # uname 00:33:21.007 06:59:33 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:21.007 06:59:33 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81829 00:33:21.007 killing process with pid 81829 00:33:21.007 06:59:33 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:21.007 06:59:33 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:21.007 06:59:33 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81829' 00:33:21.007 06:59:33 ftl -- common/autotest_common.sh@973 -- # kill 81829 00:33:21.007 06:59:33 ftl -- common/autotest_common.sh@978 -- # wait 81829 00:33:22.385 06:59:34 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:33:22.645 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:22.645 Waiting for block devices as requested 00:33:22.645 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:33:22.645 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:33:22.904 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:33:22.904 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:33:28.200 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:33:28.200 Remove shared memory files 00:33:28.200 06:59:40 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:33:28.200 06:59:40 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:33:28.200 06:59:40 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:33:28.200 06:59:40 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:33:28.200 06:59:40 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:33:28.200 06:59:40 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:33:28.200 06:59:40 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:33:28.200 
************************************ 00:33:28.200 END TEST ftl 00:33:28.200 ************************************ 00:33:28.200 00:33:28.200 real 9m39.428s 00:33:28.200 user 11m46.954s 00:33:28.200 sys 1m18.612s 00:33:28.200 06:59:40 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:28.200 06:59:40 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:28.200 06:59:40 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:33:28.200 06:59:40 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:33:28.200 06:59:40 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:33:28.200 06:59:40 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:33:28.200 06:59:40 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:33:28.200 06:59:40 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:33:28.200 06:59:40 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:33:28.200 06:59:40 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:33:28.200 06:59:40 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:33:28.200 06:59:40 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:33:28.200 06:59:40 -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:28.200 06:59:40 -- common/autotest_common.sh@10 -- # set +x 00:33:28.200 06:59:40 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:33:28.200 06:59:40 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:33:28.200 06:59:40 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:33:28.200 06:59:40 -- common/autotest_common.sh@10 -- # set +x 00:33:29.581 INFO: APP EXITING 00:33:29.581 INFO: killing all VMs 00:33:29.581 INFO: killing vhost app 00:33:29.581 INFO: EXIT DONE 00:33:29.841 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:30.415 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:33:30.415 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:33:30.415 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:33:30.415 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:33:30.988 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:31.249 Cleaning 00:33:31.249 Removing: /var/run/dpdk/spdk0/config 00:33:31.249 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:31.249 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:31.249 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:31.249 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:31.249 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:31.249 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:31.249 Removing: /var/run/dpdk/spdk0 00:33:31.249 Removing: /var/run/dpdk/spdk_pid57029 00:33:31.249 Removing: /var/run/dpdk/spdk_pid57231 00:33:31.249 Removing: /var/run/dpdk/spdk_pid57444 00:33:31.249 Removing: /var/run/dpdk/spdk_pid57537 00:33:31.249 Removing: /var/run/dpdk/spdk_pid57576 00:33:31.249 Removing: /var/run/dpdk/spdk_pid57693 00:33:31.249 Removing: /var/run/dpdk/spdk_pid57711 00:33:31.249 Removing: /var/run/dpdk/spdk_pid57905 00:33:31.249 Removing: /var/run/dpdk/spdk_pid57998 00:33:31.249 Removing: /var/run/dpdk/spdk_pid58094 00:33:31.249 Removing: /var/run/dpdk/spdk_pid58205 00:33:31.249 Removing: /var/run/dpdk/spdk_pid58302 00:33:31.249 Removing: /var/run/dpdk/spdk_pid58336 00:33:31.249 Removing: /var/run/dpdk/spdk_pid58378 00:33:31.249 Removing: /var/run/dpdk/spdk_pid58448 00:33:31.249 Removing: /var/run/dpdk/spdk_pid58538 00:33:31.249 Removing: /var/run/dpdk/spdk_pid58985 00:33:31.249 Removing: /var/run/dpdk/spdk_pid59049 
00:33:31.249 Removing: /var/run/dpdk/spdk_pid59107 00:33:31.249 Removing: /var/run/dpdk/spdk_pid59117 00:33:31.249 Removing: /var/run/dpdk/spdk_pid59235 00:33:31.249 Removing: /var/run/dpdk/spdk_pid59246 00:33:31.249 Removing: /var/run/dpdk/spdk_pid59365 00:33:31.249 Removing: /var/run/dpdk/spdk_pid59381 00:33:31.249 Removing: /var/run/dpdk/spdk_pid59445 00:33:31.249 Removing: /var/run/dpdk/spdk_pid59463 00:33:31.249 Removing: /var/run/dpdk/spdk_pid59521 00:33:31.249 Removing: /var/run/dpdk/spdk_pid59545 00:33:31.249 Removing: /var/run/dpdk/spdk_pid59729 00:33:31.249 Removing: /var/run/dpdk/spdk_pid59771 00:33:31.249 Removing: /var/run/dpdk/spdk_pid59849 00:33:31.249 Removing: /var/run/dpdk/spdk_pid60032 00:33:31.249 Removing: /var/run/dpdk/spdk_pid60127 00:33:31.249 Removing: /var/run/dpdk/spdk_pid60169 00:33:31.249 Removing: /var/run/dpdk/spdk_pid60625 00:33:31.249 Removing: /var/run/dpdk/spdk_pid60723 00:33:31.249 Removing: /var/run/dpdk/spdk_pid60832 00:33:31.249 Removing: /var/run/dpdk/spdk_pid60885 00:33:31.249 Removing: /var/run/dpdk/spdk_pid60916 00:33:31.249 Removing: /var/run/dpdk/spdk_pid61000 00:33:31.249 Removing: /var/run/dpdk/spdk_pid61624 00:33:31.249 Removing: /var/run/dpdk/spdk_pid61662 00:33:31.249 Removing: /var/run/dpdk/spdk_pid62124 00:33:31.249 Removing: /var/run/dpdk/spdk_pid62218 00:33:31.511 Removing: /var/run/dpdk/spdk_pid62334 00:33:31.512 Removing: /var/run/dpdk/spdk_pid62387 00:33:31.512 Removing: /var/run/dpdk/spdk_pid62413 00:33:31.512 Removing: /var/run/dpdk/spdk_pid62438 00:33:31.512 Removing: /var/run/dpdk/spdk_pid64278 00:33:31.512 Removing: /var/run/dpdk/spdk_pid64415 00:33:31.512 Removing: /var/run/dpdk/spdk_pid64419 00:33:31.512 Removing: /var/run/dpdk/spdk_pid64431 00:33:31.512 Removing: /var/run/dpdk/spdk_pid64480 00:33:31.512 Removing: /var/run/dpdk/spdk_pid64484 00:33:31.512 Removing: /var/run/dpdk/spdk_pid64496 00:33:31.512 Removing: /var/run/dpdk/spdk_pid64541 00:33:31.512 Removing: /var/run/dpdk/spdk_pid64545 00:33:31.512 Removing: /var/run/dpdk/spdk_pid64557 00:33:31.512 Removing: /var/run/dpdk/spdk_pid64602 00:33:31.512 Removing: /var/run/dpdk/spdk_pid64606 00:33:31.512 Removing: /var/run/dpdk/spdk_pid64618 00:33:31.512 Removing: /var/run/dpdk/spdk_pid66002 00:33:31.512 Removing: /var/run/dpdk/spdk_pid66099 00:33:31.512 Removing: /var/run/dpdk/spdk_pid67509 00:33:31.512 Removing: /var/run/dpdk/spdk_pid69264 00:33:31.512 Removing: /var/run/dpdk/spdk_pid69335 00:33:31.512 Removing: /var/run/dpdk/spdk_pid69405 00:33:31.512 Removing: /var/run/dpdk/spdk_pid69515 00:33:31.512 Removing: /var/run/dpdk/spdk_pid69607 00:33:31.512 Removing: /var/run/dpdk/spdk_pid69703 00:33:31.512 Removing: /var/run/dpdk/spdk_pid69772 00:33:31.512 Removing: /var/run/dpdk/spdk_pid69847 00:33:31.512 Removing: /var/run/dpdk/spdk_pid69951 00:33:31.512 Removing: /var/run/dpdk/spdk_pid70043 00:33:31.512 Removing: /var/run/dpdk/spdk_pid70137 00:33:31.512 Removing: /var/run/dpdk/spdk_pid70207 00:33:31.512 Removing: /var/run/dpdk/spdk_pid70282 00:33:31.512 Removing: /var/run/dpdk/spdk_pid70386 00:33:31.512 Removing: /var/run/dpdk/spdk_pid70478 00:33:31.512 Removing: /var/run/dpdk/spdk_pid70574 00:33:31.512 Removing: /var/run/dpdk/spdk_pid70639 00:33:31.512 Removing: /var/run/dpdk/spdk_pid70714 00:33:31.512 Removing: /var/run/dpdk/spdk_pid70818 00:33:31.512 Removing: /var/run/dpdk/spdk_pid70910 00:33:31.512 Removing: /var/run/dpdk/spdk_pid71000 00:33:31.512 Removing: /var/run/dpdk/spdk_pid71073 00:33:31.512 Removing: /var/run/dpdk/spdk_pid71147 00:33:31.512 Removing: 
/var/run/dpdk/spdk_pid71221 00:33:31.512 Removing: /var/run/dpdk/spdk_pid71298 00:33:31.512 Removing: /var/run/dpdk/spdk_pid71404 00:33:31.512 Removing: /var/run/dpdk/spdk_pid71490 00:33:31.512 Removing: /var/run/dpdk/spdk_pid71585 00:33:31.512 Removing: /var/run/dpdk/spdk_pid71655 00:33:31.512 Removing: /var/run/dpdk/spdk_pid71729 00:33:31.512 Removing: /var/run/dpdk/spdk_pid71803 00:33:31.512 Removing: /var/run/dpdk/spdk_pid71877 00:33:31.512 Removing: /var/run/dpdk/spdk_pid71975 00:33:31.512 Removing: /var/run/dpdk/spdk_pid72070 00:33:31.512 Removing: /var/run/dpdk/spdk_pid72215 00:33:31.512 Removing: /var/run/dpdk/spdk_pid72489 00:33:31.512 Removing: /var/run/dpdk/spdk_pid72527 00:33:31.512 Removing: /var/run/dpdk/spdk_pid72968 00:33:31.512 Removing: /var/run/dpdk/spdk_pid73156 00:33:31.512 Removing: /var/run/dpdk/spdk_pid73256 00:33:31.512 Removing: /var/run/dpdk/spdk_pid73366 00:33:31.512 Removing: /var/run/dpdk/spdk_pid73408 00:33:31.512 Removing: /var/run/dpdk/spdk_pid73434 00:33:31.512 Removing: /var/run/dpdk/spdk_pid73753 00:33:31.512 Removing: /var/run/dpdk/spdk_pid73994 00:33:31.512 Removing: /var/run/dpdk/spdk_pid74062 00:33:31.512 Removing: /var/run/dpdk/spdk_pid74454 00:33:31.512 Removing: /var/run/dpdk/spdk_pid74598 00:33:31.512 Removing: /var/run/dpdk/spdk_pid75398 00:33:31.512 Removing: /var/run/dpdk/spdk_pid75525 00:33:31.512 Removing: /var/run/dpdk/spdk_pid75701 00:33:31.512 Removing: /var/run/dpdk/spdk_pid75793 00:33:31.512 Removing: /var/run/dpdk/spdk_pid76084 00:33:31.512 Removing: /var/run/dpdk/spdk_pid76332 00:33:31.512 Removing: /var/run/dpdk/spdk_pid76669 00:33:31.512 Removing: /var/run/dpdk/spdk_pid76853 00:33:31.512 Removing: /var/run/dpdk/spdk_pid76950 00:33:31.512 Removing: /var/run/dpdk/spdk_pid77004 00:33:31.512 Removing: /var/run/dpdk/spdk_pid77219 00:33:31.512 Removing: /var/run/dpdk/spdk_pid77248 00:33:31.512 Removing: /var/run/dpdk/spdk_pid77302 00:33:31.512 Removing: /var/run/dpdk/spdk_pid77580 00:33:31.512 Removing: /var/run/dpdk/spdk_pid77828 00:33:31.512 Removing: /var/run/dpdk/spdk_pid78137 00:33:31.512 Removing: /var/run/dpdk/spdk_pid78458 00:33:31.512 Removing: /var/run/dpdk/spdk_pid78782 00:33:31.512 Removing: /var/run/dpdk/spdk_pid79185 00:33:31.512 Removing: /var/run/dpdk/spdk_pid79323 00:33:31.512 Removing: /var/run/dpdk/spdk_pid79410 00:33:31.512 Removing: /var/run/dpdk/spdk_pid79835 00:33:31.512 Removing: /var/run/dpdk/spdk_pid79894 00:33:31.512 Removing: /var/run/dpdk/spdk_pid80218 00:33:31.512 Removing: /var/run/dpdk/spdk_pid80501 00:33:31.512 Removing: /var/run/dpdk/spdk_pid80863 00:33:31.512 Removing: /var/run/dpdk/spdk_pid80974 00:33:31.512 Removing: /var/run/dpdk/spdk_pid81013 00:33:31.512 Removing: /var/run/dpdk/spdk_pid81077 00:33:31.512 Removing: /var/run/dpdk/spdk_pid81133 00:33:31.512 Removing: /var/run/dpdk/spdk_pid81192 00:33:31.512 Removing: /var/run/dpdk/spdk_pid81402 00:33:31.512 Removing: /var/run/dpdk/spdk_pid81495 00:33:31.512 Removing: /var/run/dpdk/spdk_pid81562 00:33:31.773 Removing: /var/run/dpdk/spdk_pid81622 00:33:31.773 Removing: /var/run/dpdk/spdk_pid81658 00:33:31.773 Removing: /var/run/dpdk/spdk_pid81719 00:33:31.773 Removing: /var/run/dpdk/spdk_pid81829 00:33:31.773 Clean 00:33:31.773 06:59:44 -- common/autotest_common.sh@1453 -- # return 0 00:33:31.773 06:59:44 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:33:31.773 06:59:44 -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:31.773 06:59:44 -- common/autotest_common.sh@10 -- # set +x 00:33:31.773 06:59:44 -- spdk/autotest.sh@391 -- # 
timing_exit autotest 00:33:31.773 06:59:44 -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:31.773 06:59:44 -- common/autotest_common.sh@10 -- # set +x 00:33:31.773 06:59:44 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:33:31.773 06:59:44 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:33:31.773 06:59:44 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:33:31.773 06:59:44 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:33:31.773 06:59:44 -- spdk/autotest.sh@398 -- # hostname 00:33:31.773 06:59:44 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:33:32.119 geninfo: WARNING: invalid characters removed from testname! 00:33:58.744 07:00:09 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:00.655 07:00:13 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:03.204 07:00:15 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:05.751 07:00:18 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:09.057 07:00:21 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:11.601 07:00:23 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:14.144 07:00:26 -- spdk/autotest.sh@408 -- # 
rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:14.144 07:00:26 -- spdk/autorun.sh@1 -- $ timing_finish 00:34:14.144 07:00:26 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:34:14.144 07:00:26 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:14.144 07:00:26 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:34:14.144 07:00:26 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:34:14.144 + [[ -n 5025 ]] 00:34:14.144 + sudo kill 5025 00:34:14.156 [Pipeline] } 00:34:14.172 [Pipeline] // timeout 00:34:14.179 [Pipeline] } 00:34:14.195 [Pipeline] // stage 00:34:14.200 [Pipeline] } 00:34:14.217 [Pipeline] // catchError 00:34:14.228 [Pipeline] stage 00:34:14.230 [Pipeline] { (Stop VM) 00:34:14.245 [Pipeline] sh 00:34:14.536 + vagrant halt 00:34:17.087 ==> default: Halting domain... 00:34:23.715 [Pipeline] sh 00:34:23.999 + vagrant destroy -f 00:34:26.535 ==> default: Removing domain... 00:34:26.809 [Pipeline] sh 00:34:27.093 + mv output /var/jenkins/workspace/nvme-vg-autotest_2/output 00:34:27.103 [Pipeline] } 00:34:27.119 [Pipeline] // stage 00:34:27.124 [Pipeline] } 00:34:27.140 [Pipeline] // dir 00:34:27.146 [Pipeline] } 00:34:27.162 [Pipeline] // wrap 00:34:27.167 [Pipeline] } 00:34:27.182 [Pipeline] // catchError 00:34:27.192 [Pipeline] stage 00:34:27.195 [Pipeline] { (Epilogue) 00:34:27.209 [Pipeline] sh 00:34:27.498 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:34:32.788 [Pipeline] catchError 00:34:32.790 [Pipeline] { 00:34:32.800 [Pipeline] sh 00:34:33.160 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:34:33.160 Artifacts sizes are good 00:34:33.172 [Pipeline] } 00:34:33.187 [Pipeline] // catchError 00:34:33.199 [Pipeline] archiveArtifacts 00:34:33.208 Archiving artifacts 00:34:33.315 [Pipeline] cleanWs 00:34:33.329 [WS-CLEANUP] Deleting project workspace... 00:34:33.329 [WS-CLEANUP] Deferred wipeout is used... 00:34:33.336 [WS-CLEANUP] done 00:34:33.339 [Pipeline] } 00:34:33.356 [Pipeline] // stage 00:34:33.362 [Pipeline] } 00:34:33.376 [Pipeline] // node 00:34:33.382 [Pipeline] End of Pipeline 00:34:33.417 Finished: SUCCESS
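The coverage post-processing logged between 07:00:09 and 07:00:26 condenses to the sketch below. Paths, the hostname test tag, and the removal patterns are copied from the lcov commands actually traced; the --rc branch/function flags and the per-pattern --ignore-errors option are omitted for brevity.

repo=/home/vagrant/spdk_repo/spdk
out=$repo/../output
# capture counters for this run, tagged with the VM hostname
lcov -q -c --no-external -d "$repo" -t "$(hostname)" -o "$out/cov_test.info"
# merge the pre-test baseline with this run's counters
lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
# strip vendored, system, and example sources from the report, in trace order
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov -q -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
done
rm -f "$out/cov_base.info" "$out/cov_test.info"

Running each -r filter in place (same input and output tracefile) keeps only one cov_total.info on disk between passes, which is why the log shows the same path on both sides of every removal step.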